Test Report: QEMU_macOS 19734

795b96072c2ea51545c2bdfc984dcdf8fe273799:2024-09-30:36435

Failed tests (99/273)

Order  Failed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 38.99
7 TestDownloadOnly/v1.20.0/kubectl 0
21 TestBinaryMirror 0.25
22 TestOffline 10.01
33 TestAddons/parallel/Registry 71.3
45 TestCertOptions 10.12
46 TestCertExpiration 195.36
47 TestDockerFlags 10.16
48 TestForceSystemdFlag 10.06
49 TestForceSystemdEnv 11.47
94 TestFunctional/parallel/ServiceCmdConnect 35.61
166 TestMultiControlPlane/serial/StopSecondaryNode 64.12
167 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 51.93
168 TestMultiControlPlane/serial/RestartSecondaryNode 87.04
170 TestMultiControlPlane/serial/RestartClusterKeepsNodes 234.38
171 TestMultiControlPlane/serial/DeleteSecondaryNode 0.1
172 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.08
173 TestMultiControlPlane/serial/StopCluster 202.07
174 TestMultiControlPlane/serial/RestartCluster 5.25
175 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.08
176 TestMultiControlPlane/serial/AddSecondaryNode 0.07
180 TestImageBuild/serial/Setup 10.01
183 TestJSONOutput/start/Command 9.84
189 TestJSONOutput/pause/Command 0.08
195 TestJSONOutput/unpause/Command 0.05
212 TestMinikubeProfile 10.21
215 TestMountStart/serial/StartWithMountFirst 10.53
218 TestMultiNode/serial/FreshStart2Nodes 9.83
219 TestMultiNode/serial/DeployApp2Nodes 97.78
220 TestMultiNode/serial/PingHostFrom2Pods 0.09
221 TestMultiNode/serial/AddNode 0.08
222 TestMultiNode/serial/MultiNodeLabels 0.06
223 TestMultiNode/serial/ProfileList 0.08
224 TestMultiNode/serial/CopyFile 0.06
225 TestMultiNode/serial/StopNode 0.14
226 TestMultiNode/serial/StartAfterStop 52.77
227 TestMultiNode/serial/RestartKeepsNodes 9.05
228 TestMultiNode/serial/DeleteNode 0.1
229 TestMultiNode/serial/StopMultiNode 2.15
230 TestMultiNode/serial/RestartMultiNode 5.26
231 TestMultiNode/serial/ValidateNameConflict 20.05
235 TestPreload 10.06
237 TestScheduledStopUnix 10.09
238 TestSkaffold 16.18
241 TestRunningBinaryUpgrade 622.66
243 TestKubernetesUpgrade 18.92
256 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 1.49
257 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 1.22
259 TestStoppedBinaryUpgrade/Upgrade 581.02
261 TestPause/serial/Start 9.96
271 TestNoKubernetes/serial/StartWithK8s 9.86
272 TestNoKubernetes/serial/StartWithStopK8s 5.82
273 TestNoKubernetes/serial/Start 5.84
277 TestNoKubernetes/serial/StartNoArgs 5.88
279 TestNetworkPlugins/group/auto/Start 9.98
280 TestNetworkPlugins/group/calico/Start 9.9
281 TestNetworkPlugins/group/custom-flannel/Start 9.88
282 TestNetworkPlugins/group/false/Start 9.81
283 TestNetworkPlugins/group/kindnet/Start 9.85
284 TestNetworkPlugins/group/flannel/Start 9.87
285 TestNetworkPlugins/group/enable-default-cni/Start 9.76
286 TestNetworkPlugins/group/bridge/Start 9.88
288 TestNetworkPlugins/group/kubenet/Start 9.94
290 TestStartStop/group/old-k8s-version/serial/FirstStart 9.9
291 TestStartStop/group/old-k8s-version/serial/DeployApp 0.09
292 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.11
295 TestStartStop/group/no-preload/serial/FirstStart 10.22
297 TestStartStop/group/old-k8s-version/serial/SecondStart 7.25
298 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 0.03
299 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 0.06
300 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.1
301 TestStartStop/group/old-k8s-version/serial/Pause 0.12
303 TestStartStop/group/embed-certs/serial/FirstStart 11.79
304 TestStartStop/group/no-preload/serial/DeployApp 0.1
305 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.15
308 TestStartStop/group/no-preload/serial/SecondStart 7.31
309 TestStartStop/group/embed-certs/serial/DeployApp 0.1
310 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 0.03
311 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 0.06
312 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.11
313 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.08
314 TestStartStop/group/no-preload/serial/Pause 0.1
317 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 10.12
319 TestStartStop/group/embed-certs/serial/SecondStart 5.89
320 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 0.03
321 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 0.06
322 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.08
323 TestStartStop/group/embed-certs/serial/Pause 0.11
325 TestStartStop/group/newest-cni/serial/FirstStart 11.71
326 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 0.1
327 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.14
330 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 6.27
333 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 0.04
334 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 0.06
336 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.07
337 TestStartStop/group/default-k8s-diff-port/serial/Pause 0.1
339 TestStartStop/group/newest-cni/serial/SecondStart 5.26
342 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.07
343 TestStartStop/group/newest-cni/serial/Pause 0.1
TestDownloadOnly/v1.20.0/json-events (38.99s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-388000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-388000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 : exit status 40 (38.992124334s)

-- stdout --
	{"specversion":"1.0","id":"4822d2aa-1c99-4321-97d0-56bbebcf68da","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[download-only-388000] minikube v1.34.0 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"412f6036-5f4b-4a27-97e8-2360084c2ae7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19734"}}
	{"specversion":"1.0","id":"9624b5c0-b93c-4600-ab5c-a6f6266c16a0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19734-1406/kubeconfig"}}
	{"specversion":"1.0","id":"7c7e832f-c6c2-4c08-a2cd-3e4967e04b78","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"ed4f200d-fedc-45cf-9793-f44df09b282a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"d36f9873-c8c7-4708-84fd-f6a0a4d17968","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19734-1406/.minikube"}}
	{"specversion":"1.0","id":"ac0e8a81-e021-4ce6-987f-ba4d984ed7b0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.warning","datacontenttype":"application/json","data":{"message":"minikube skips various validations when --force is supplied; this may lead to unexpected behavior"}}
	{"specversion":"1.0","id":"1a42e9c0-4078-4a26-88b2-acf030b4601e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"14dc889d-8909-4ec1-9bb3-76fda22212f8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"3ed651e5-647a-4220-98b7-64ff91956523","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Downloading VM boot image ...","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"1995e589-669c-4778-a4fc-a21ce7616f21","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"download-only-388000\" primary control-plane node in \"download-only-388000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"90a968b8-581d-4388-88d5-7bbf6bf4562d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Downloading Kubernetes v1.20.0 preload ...","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"6d33588a-bde1-42d1-95d9-5f8b2b37e338","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"40","issues":"","message":"Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: \u0026{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19734-1406/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x1096c16c0 0x1096c16c0 0x1096c16c0 0x1096c16c0 0x1096c16c0 0x1096c16c0 0x1096c16c0] Decompressors:map[bz2:0x14000811160 gz:0x14000811168 tar:0x14000811110 tar.bz2:0x14000811120 tar.gz:0x14000811130 tar.xz:0x14000811140 tar.zst:0x14000811150 tbz2:0x14000811120 tgz:0x14
000811130 txz:0x14000811140 tzst:0x14000811150 xz:0x14000811170 zip:0x14000811180 zst:0x14000811178] Getters:map[file:0x140003e87f0 http:0x140009040a0 https:0x140009040f0] Dir:false ProgressListener:\u003cnil\u003e Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404","name":"INET_CACHE_KUBECTL","url":""}}
	{"specversion":"1.0","id":"35fb8fa6-166f-4a4f-8936-21cb79b9ebe5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
** stderr ** 
	I0930 03:20:04.345621    1930 out.go:345] Setting OutFile to fd 1 ...
	I0930 03:20:04.346035    1930 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 03:20:04.346040    1930 out.go:358] Setting ErrFile to fd 2...
	I0930 03:20:04.346042    1930 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 03:20:04.346233    1930 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19734-1406/.minikube/bin
	W0930 03:20:04.346362    1930 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19734-1406/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19734-1406/.minikube/config/config.json: no such file or directory
	I0930 03:20:04.347863    1930 out.go:352] Setting JSON to true
	I0930 03:20:04.365094    1930 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1167,"bootTime":1727690437,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0930 03:20:04.365205    1930 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0930 03:20:04.370452    1930 out.go:97] [download-only-388000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0930 03:20:04.370625    1930 notify.go:220] Checking for updates...
	W0930 03:20:04.370656    1930 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/preloaded-tarball: no such file or directory
	I0930 03:20:04.373364    1930 out.go:169] MINIKUBE_LOCATION=19734
	I0930 03:20:04.376356    1930 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19734-1406/kubeconfig
	I0930 03:20:04.381386    1930 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0930 03:20:04.382727    1930 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0930 03:20:04.386391    1930 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19734-1406/.minikube
	W0930 03:20:04.392379    1930 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0930 03:20:04.392625    1930 driver.go:394] Setting default libvirt URI to qemu:///system
	I0930 03:20:04.397312    1930 out.go:97] Using the qemu2 driver based on user configuration
	I0930 03:20:04.397328    1930 start.go:297] selected driver: qemu2
	I0930 03:20:04.397340    1930 start.go:901] validating driver "qemu2" against <nil>
	I0930 03:20:04.397397    1930 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0930 03:20:04.400377    1930 out.go:169] Automatically selected the socket_vmnet network
	I0930 03:20:04.405963    1930 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0930 03:20:04.406068    1930 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0930 03:20:04.406123    1930 cni.go:84] Creating CNI manager for ""
	I0930 03:20:04.406161    1930 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0930 03:20:04.406217    1930 start.go:340] cluster config:
	{Name:download-only-388000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-388000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 03:20:04.411278    1930 iso.go:125] acquiring lock: {Name:mk0aae8f7f133a44b195b0eefe4c7ae573d14aac Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0930 03:20:04.415318    1930 out.go:97] Downloading VM boot image ...
	I0930 03:20:04.415333    1930 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso
	I0930 03:20:22.429216    1930 out.go:97] Starting "download-only-388000" primary control-plane node in "download-only-388000" cluster
	I0930 03:20:22.429241    1930 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0930 03:20:22.703058    1930 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0930 03:20:22.703152    1930 cache.go:56] Caching tarball of preloaded images
	I0930 03:20:22.704002    1930 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0930 03:20:22.711250    1930 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0930 03:20:22.711277    1930 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0930 03:20:23.298344    1930 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0930 03:20:41.454787    1930 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0930 03:20:41.454962    1930 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0930 03:20:42.151938    1930 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0930 03:20:42.152148    1930 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19734-1406/.minikube/profiles/download-only-388000/config.json ...
	I0930 03:20:42.152165    1930 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19734-1406/.minikube/profiles/download-only-388000/config.json: {Name:mk7b46bb34296f896fabb72562914322ff711b8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 03:20:42.152429    1930 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0930 03:20:42.152624    1930 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0930 03:20:43.257296    1930 out.go:193] 
	W0930 03:20:43.262222    1930 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19734-1406/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x1096c16c0 0x1096c16c0 0x1096c16c0 0x1096c16c0 0x1096c16c0 0x1096c16c0 0x1096c16c0] Decompressors:map[bz2:0x14000811160 gz:0x14000811168 tar:0x14000811110 tar.bz2:0x14000811120 tar.gz:0x14000811130 tar.xz:0x14000811140 tar.zst:0x14000811150 tbz2:0x14000811120 tgz:0x14000811130 txz:0x14000811140 tzst:0x14000811150 xz:0x14000811170 zip:0x14000811180 zst:0x14000811178] Getters:map[file:0x140003e87f0 http:0x140009040a0 https:0x140009040f0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0930 03:20:43.262249    1930 out_reason.go:110] 
	W0930 03:20:43.270236    1930 out.go:283] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0930 03:20:43.274092    1930 out.go:193] 

** /stderr **
aaa_download_only_test.go:83: failed to download only. args: ["start" "-o=json" "--download-only" "-p" "download-only-388000" "--force" "--alsologtostderr" "--kubernetes-version=v1.20.0" "--container-runtime=docker" "--driver=qemu2" ""] exit status 40
--- FAIL: TestDownloadOnly/v1.20.0/json-events (38.99s)
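
Note: the root cause here is the download step, not the test logic: the kubectl checksum URL for v1.20.0 on darwin/arm64 returns HTTP 404, most likely because upstream never published darwin/arm64 kubectl binaries for a release that old. A minimal Go sketch (not part of the test suite) that reproduces the 404 from the log above:

package main

import (
	"fmt"
	"net/http"
)

func main() {
	// Checksum URL copied verbatim from the failure message above.
	const url = "https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256"
	resp, err := http.Head(url)
	if err != nil {
		fmt.Println("request error:", err)
		return
	}
	resp.Body.Close()
	fmt.Println(resp.Status) // reported by the getter as "bad response code: 404"
}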

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:175: expected the file for binary exist at "/Users/jenkins/minikube-integration/19734-1406/.minikube/cache/darwin/arm64/v1.20.0/kubectl" but got error stat /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/darwin/arm64/v1.20.0/kubectl: no such file or directory
--- FAIL: TestDownloadOnly/v1.20.0/kubectl (0.00s)
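
Note: this subtest only checks that the kubectl binary exists in the cache, so it fails purely as a knock-on effect of the download failure in TestDownloadOnly/v1.20.0/json-events above. A sketch of the existence check using the cache path from the log (a hypothetical standalone program, not the test's actual code):

package main

import (
	"fmt"
	"os"
)

func main() {
	// Cache path copied from the failure message above.
	const p = "/Users/jenkins/minikube-integration/19734-1406/.minikube/cache/darwin/arm64/v1.20.0/kubectl"
	if _, err := os.Stat(p); err != nil {
		fmt.Println(err) // "no such file or directory", as reported
	}
}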

TestBinaryMirror (0.25s)

=== RUN   TestBinaryMirror
I0930 03:21:02.096720    1929 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/darwin/arm64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 start --download-only -p binary-mirror-109000 --alsologtostderr --binary-mirror http://127.0.0.1:49316 --driver=qemu2 
aaa_download_only_test.go:314: (dbg) Non-zero exit: out/minikube-darwin-arm64 start --download-only -p binary-mirror-109000 --alsologtostderr --binary-mirror http://127.0.0.1:49316 --driver=qemu2 : exit status 40 (151.650833ms)

-- stdout --
	* [binary-mirror-109000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19734
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19734-1406/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19734-1406/.minikube
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "binary-mirror-109000" primary control-plane node in "binary-mirror-109000" cluster
	
	

-- /stdout --
** stderr ** 
	I0930 03:21:02.156992    2005 out.go:345] Setting OutFile to fd 1 ...
	I0930 03:21:02.157112    2005 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 03:21:02.157116    2005 out.go:358] Setting ErrFile to fd 2...
	I0930 03:21:02.157118    2005 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 03:21:02.157309    2005 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19734-1406/.minikube/bin
	I0930 03:21:02.158400    2005 out.go:352] Setting JSON to false
	I0930 03:21:02.174811    2005 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1225,"bootTime":1727690437,"procs":458,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0930 03:21:02.174875    2005 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0930 03:21:02.179154    2005 out.go:177] * [binary-mirror-109000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0930 03:21:02.186112    2005 out.go:177]   - MINIKUBE_LOCATION=19734
	I0930 03:21:02.186160    2005 notify.go:220] Checking for updates...
	I0930 03:21:02.190027    2005 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19734-1406/kubeconfig
	I0930 03:21:02.193090    2005 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0930 03:21:02.196112    2005 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0930 03:21:02.199079    2005 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19734-1406/.minikube
	I0930 03:21:02.202235    2005 driver.go:394] Setting default libvirt URI to qemu:///system
	I0930 03:21:02.206083    2005 out.go:177] * Using the qemu2 driver based on user configuration
	I0930 03:21:02.213042    2005 start.go:297] selected driver: qemu2
	I0930 03:21:02.213047    2005 start.go:901] validating driver "qemu2" against <nil>
	I0930 03:21:02.213092    2005 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0930 03:21:02.216059    2005 out.go:177] * Automatically selected the socket_vmnet network
	I0930 03:21:02.221193    2005 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0930 03:21:02.221295    2005 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0930 03:21:02.221313    2005 cni.go:84] Creating CNI manager for ""
	I0930 03:21:02.221340    2005 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0930 03:21:02.221347    2005 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0930 03:21:02.221386    2005 start.go:340] cluster config:
	{Name:binary-mirror-109000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:binary-mirror-109000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror:http://127.0.0.1:49316 DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 03:21:02.224901    2005 iso.go:125] acquiring lock: {Name:mk0aae8f7f133a44b195b0eefe4c7ae573d14aac Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0930 03:21:02.233967    2005 out.go:177] * Starting "binary-mirror-109000" primary control-plane node in "binary-mirror-109000" cluster
	I0930 03:21:02.238077    2005 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0930 03:21:02.238104    2005 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0930 03:21:02.238116    2005 cache.go:56] Caching tarball of preloaded images
	I0930 03:21:02.238198    2005 preload.go:172] Found /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0930 03:21:02.238204    2005 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0930 03:21:02.238477    2005 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19734-1406/.minikube/profiles/binary-mirror-109000/config.json ...
	I0930 03:21:02.238487    2005 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19734-1406/.minikube/profiles/binary-mirror-109000/config.json: {Name:mkf912887e2198ced5cba068e97897fe69fc1db1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 03:21:02.238872    2005 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0930 03:21:02.238933    2005 download.go:107] Downloading: http://127.0.0.1:49316/v1.31.1/bin/darwin/arm64/kubectl?checksum=file:http://127.0.0.1:49316/v1.31.1/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/darwin/arm64/v1.31.1/kubectl
	I0930 03:21:02.257169    2005 out.go:201] 
	W0930 03:21:02.260105    2005 out.go:270] X Exiting due to INET_CACHE_KUBECTL: Failed to cache kubectl: download failed: http://127.0.0.1:49316/v1.31.1/bin/darwin/arm64/kubectl?checksum=file:http://127.0.0.1:49316/v1.31.1/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:http://127.0.0.1:49316/v1.31.1/bin/darwin/arm64/kubectl?checksum=file:http://127.0.0.1:49316/v1.31.1/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19734-1406/.minikube/cache/darwin/arm64/v1.31.1/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x1055156c0 0x1055156c0 0x1055156c0 0x1055156c0 0x1055156c0 0x1055156c0 0x1055156c0] Decompressors:map[bz2:0x14000528930 gz:0x14000528938 tar:0x140005288e0 tar.bz2:0x140005288f0 tar.gz:0x14000528900 tar.xz:0x14000528910 tar.zst:0x14000528920 tbz2:0x140005288f0 tgz:0x14000528900 txz:0x14000528910 tzst:0x14000528920 xz:0x14000528940 zip:0x14000528950 zst:0x14000528948] Getters:map[file:0x1400062f400 http:0x14000813130 https:0x140008131d0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: unexpected EOF
	X Exiting due to INET_CACHE_KUBECTL: Failed to cache kubectl: download failed: http://127.0.0.1:49316/v1.31.1/bin/darwin/arm64/kubectl?checksum=file:http://127.0.0.1:49316/v1.31.1/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:http://127.0.0.1:49316/v1.31.1/bin/darwin/arm64/kubectl?checksum=file:http://127.0.0.1:49316/v1.31.1/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19734-1406/.minikube/cache/darwin/arm64/v1.31.1/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x1055156c0 0x1055156c0 0x1055156c0 0x1055156c0 0x1055156c0 0x1055156c0 0x1055156c0] Decompressors:map[bz2:0x14000528930 gz:0x14000528938 tar:0x140005288e0 tar.bz2:0x140005288f0 tar.gz:0x14000528900 tar.xz:0x14000528910 tar.zst:0x14000528920 tbz2:0x140005288f0 tgz:0x14000528900 txz:0x14000528910 tzst:0x14000528920 xz:0x14000528940 zip:0x14000528950 zst:0x14000528948] Getters:map[file:0x1400062f400 http:0x14000813130 https:0x140008131d0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: unexpected EOF
	W0930 03:21:02.260113    2005 out.go:270] * 
	* 
	W0930 03:21:02.260647    2005 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0930 03:21:02.271058    2005 out.go:201] 

** /stderr **
aaa_download_only_test.go:315: start with --binary-mirror failed ["start" "--download-only" "-p" "binary-mirror-109000" "--alsologtostderr" "--binary-mirror" "http://127.0.0.1:49316" "--driver=qemu2" ""] : exit status 40
helpers_test.go:175: Cleaning up "binary-mirror-109000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p binary-mirror-109000
--- FAIL: TestBinaryMirror (0.25s)
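
Note: here the kubectl download goes through the local --binary-mirror on 127.0.0.1:49316 and dies with "unexpected EOF", i.e. the mirror closed the connection mid-body. The request paths in the log imply the layout such a mirror must serve: <mirror>/<version>/bin/<os>/<arch>/kubectl plus a sibling .sha256 file. A minimal sketch of a file-based mirror (the directory layout and use of http.FileServer are assumptions for illustration; the test's real mirror lives in the minikube test harness):

package main

import (
	"log"
	"net/http"
)

func main() {
	// Serve ./mirror, which would contain e.g.
	//   mirror/v1.31.1/bin/darwin/arm64/kubectl
	//   mirror/v1.31.1/bin/darwin/arm64/kubectl.sha256
	log.Fatal(http.ListenAndServe("127.0.0.1:49316", http.FileServer(http.Dir("mirror"))))
}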

TestOffline (10.01s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 start -p offline-docker-897000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p offline-docker-897000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 : exit status 80 (9.849586708s)

-- stdout --
	* [offline-docker-897000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19734
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19734-1406/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19734-1406/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "offline-docker-897000" primary control-plane node in "offline-docker-897000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "offline-docker-897000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0930 04:02:16.938994    4637 out.go:345] Setting OutFile to fd 1 ...
	I0930 04:02:16.939140    4637 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 04:02:16.939143    4637 out.go:358] Setting ErrFile to fd 2...
	I0930 04:02:16.939145    4637 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 04:02:16.939278    4637 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19734-1406/.minikube/bin
	I0930 04:02:16.940422    4637 out.go:352] Setting JSON to false
	I0930 04:02:16.957898    4637 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3699,"bootTime":1727690437,"procs":504,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0930 04:02:16.957976    4637 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0930 04:02:16.963879    4637 out.go:177] * [offline-docker-897000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0930 04:02:16.971810    4637 notify.go:220] Checking for updates...
	I0930 04:02:16.975839    4637 out.go:177]   - MINIKUBE_LOCATION=19734
	I0930 04:02:16.978866    4637 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19734-1406/kubeconfig
	I0930 04:02:16.981730    4637 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0930 04:02:16.984793    4637 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0930 04:02:16.987841    4637 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19734-1406/.minikube
	I0930 04:02:16.990785    4637 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0930 04:02:16.994197    4637 config.go:182] Loaded profile config "multinode-711000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0930 04:02:16.994253    4637 driver.go:394] Setting default libvirt URI to qemu:///system
	I0930 04:02:16.997775    4637 out.go:177] * Using the qemu2 driver based on user configuration
	I0930 04:02:17.004753    4637 start.go:297] selected driver: qemu2
	I0930 04:02:17.004764    4637 start.go:901] validating driver "qemu2" against <nil>
	I0930 04:02:17.004773    4637 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0930 04:02:17.006723    4637 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0930 04:02:17.009933    4637 out.go:177] * Automatically selected the socket_vmnet network
	I0930 04:02:17.012831    4637 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0930 04:02:17.012852    4637 cni.go:84] Creating CNI manager for ""
	I0930 04:02:17.012875    4637 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0930 04:02:17.012879    4637 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0930 04:02:17.012914    4637 start.go:340] cluster config:
	{Name:offline-docker-897000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:offline-docker-897000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 04:02:17.016460    4637 iso.go:125] acquiring lock: {Name:mk0aae8f7f133a44b195b0eefe4c7ae573d14aac Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0930 04:02:17.023780    4637 out.go:177] * Starting "offline-docker-897000" primary control-plane node in "offline-docker-897000" cluster
	I0930 04:02:17.027710    4637 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0930 04:02:17.027744    4637 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0930 04:02:17.027753    4637 cache.go:56] Caching tarball of preloaded images
	I0930 04:02:17.027821    4637 preload.go:172] Found /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0930 04:02:17.027826    4637 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0930 04:02:17.027916    4637 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19734-1406/.minikube/profiles/offline-docker-897000/config.json ...
	I0930 04:02:17.027926    4637 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19734-1406/.minikube/profiles/offline-docker-897000/config.json: {Name:mk1ca06c1c5375fbaccf556f892c0d7abc9ed626 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 04:02:17.028250    4637 start.go:360] acquireMachinesLock for offline-docker-897000: {Name:mk6ecfc7d8126da9fa2597e515ffeee9de676ee7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0930 04:02:17.028284    4637 start.go:364] duration metric: took 25.291µs to acquireMachinesLock for "offline-docker-897000"
	I0930 04:02:17.028294    4637 start.go:93] Provisioning new machine with config: &{Name:offline-docker-897000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:offline-docker-897000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0930 04:02:17.028323    4637 start.go:125] createHost starting for "" (driver="qemu2")
	I0930 04:02:17.032802    4637 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0930 04:02:17.048778    4637 start.go:159] libmachine.API.Create for "offline-docker-897000" (driver="qemu2")
	I0930 04:02:17.048806    4637 client.go:168] LocalClient.Create starting
	I0930 04:02:17.048904    4637 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19734-1406/.minikube/certs/ca.pem
	I0930 04:02:17.048938    4637 main.go:141] libmachine: Decoding PEM data...
	I0930 04:02:17.048948    4637 main.go:141] libmachine: Parsing certificate...
	I0930 04:02:17.048987    4637 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19734-1406/.minikube/certs/cert.pem
	I0930 04:02:17.049011    4637 main.go:141] libmachine: Decoding PEM data...
	I0930 04:02:17.049019    4637 main.go:141] libmachine: Parsing certificate...
	I0930 04:02:17.049407    4637 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19734-1406/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0930 04:02:17.212437    4637 main.go:141] libmachine: Creating SSH key...
	I0930 04:02:17.301162    4637 main.go:141] libmachine: Creating Disk image...
	I0930 04:02:17.301175    4637 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0930 04:02:17.301361    4637 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/offline-docker-897000/disk.qcow2.raw /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/offline-docker-897000/disk.qcow2
	I0930 04:02:17.310890    4637 main.go:141] libmachine: STDOUT: 
	I0930 04:02:17.310916    4637 main.go:141] libmachine: STDERR: 
	I0930 04:02:17.310984    4637 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/offline-docker-897000/disk.qcow2 +20000M
	I0930 04:02:17.319942    4637 main.go:141] libmachine: STDOUT: Image resized.
	
	I0930 04:02:17.319960    4637 main.go:141] libmachine: STDERR: 
	I0930 04:02:17.319978    4637 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/offline-docker-897000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/offline-docker-897000/disk.qcow2
	I0930 04:02:17.319984    4637 main.go:141] libmachine: Starting QEMU VM...
	I0930 04:02:17.320003    4637 qemu.go:418] Using hvf for hardware acceleration
	I0930 04:02:17.320047    4637 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/offline-docker-897000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19734-1406/.minikube/machines/offline-docker-897000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/offline-docker-897000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0a:28:16:73:7a:af -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/offline-docker-897000/disk.qcow2
	I0930 04:02:17.323060    4637 main.go:141] libmachine: STDOUT: 
	I0930 04:02:17.323082    4637 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0930 04:02:17.323104    4637 client.go:171] duration metric: took 274.295416ms to LocalClient.Create
	I0930 04:02:19.325150    4637 start.go:128] duration metric: took 2.296846625s to createHost
	I0930 04:02:19.325181    4637 start.go:83] releasing machines lock for "offline-docker-897000", held for 2.296926s
	W0930 04:02:19.325191    4637 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0930 04:02:19.338821    4637 out.go:177] * Deleting "offline-docker-897000" in qemu2 ...
	W0930 04:02:19.355050    4637 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0930 04:02:19.355061    4637 start.go:729] Will try again in 5 seconds ...
	I0930 04:02:24.357078    4637 start.go:360] acquireMachinesLock for offline-docker-897000: {Name:mk6ecfc7d8126da9fa2597e515ffeee9de676ee7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0930 04:02:24.357164    4637 start.go:364] duration metric: took 59.542µs to acquireMachinesLock for "offline-docker-897000"
	I0930 04:02:24.357202    4637 start.go:93] Provisioning new machine with config: &{Name:offline-docker-897000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:offline-docker-897000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0930 04:02:24.357250    4637 start.go:125] createHost starting for "" (driver="qemu2")
	I0930 04:02:24.370415    4637 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0930 04:02:24.386462    4637 start.go:159] libmachine.API.Create for "offline-docker-897000" (driver="qemu2")
	I0930 04:02:24.386493    4637 client.go:168] LocalClient.Create starting
	I0930 04:02:24.386563    4637 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19734-1406/.minikube/certs/ca.pem
	I0930 04:02:24.386599    4637 main.go:141] libmachine: Decoding PEM data...
	I0930 04:02:24.386609    4637 main.go:141] libmachine: Parsing certificate...
	I0930 04:02:24.386648    4637 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19734-1406/.minikube/certs/cert.pem
	I0930 04:02:24.386671    4637 main.go:141] libmachine: Decoding PEM data...
	I0930 04:02:24.386679    4637 main.go:141] libmachine: Parsing certificate...
	I0930 04:02:24.386974    4637 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19734-1406/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0930 04:02:24.547682    4637 main.go:141] libmachine: Creating SSH key...
	I0930 04:02:24.677354    4637 main.go:141] libmachine: Creating Disk image...
	I0930 04:02:24.677360    4637 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0930 04:02:24.677567    4637 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/offline-docker-897000/disk.qcow2.raw /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/offline-docker-897000/disk.qcow2
	I0930 04:02:24.687213    4637 main.go:141] libmachine: STDOUT: 
	I0930 04:02:24.687232    4637 main.go:141] libmachine: STDERR: 
	I0930 04:02:24.687291    4637 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/offline-docker-897000/disk.qcow2 +20000M
	I0930 04:02:24.695068    4637 main.go:141] libmachine: STDOUT: Image resized.
	
	I0930 04:02:24.695085    4637 main.go:141] libmachine: STDERR: 
	I0930 04:02:24.695096    4637 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/offline-docker-897000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/offline-docker-897000/disk.qcow2
	I0930 04:02:24.695101    4637 main.go:141] libmachine: Starting QEMU VM...
	I0930 04:02:24.695111    4637 qemu.go:418] Using hvf for hardware acceleration
	I0930 04:02:24.695148    4637 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/offline-docker-897000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19734-1406/.minikube/machines/offline-docker-897000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/offline-docker-897000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ae:7f:e0:0e:43:cf -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/offline-docker-897000/disk.qcow2
	I0930 04:02:24.696740    4637 main.go:141] libmachine: STDOUT: 
	I0930 04:02:24.696753    4637 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0930 04:02:24.696764    4637 client.go:171] duration metric: took 310.272042ms to LocalClient.Create
	I0930 04:02:26.698911    4637 start.go:128] duration metric: took 2.341667875s to createHost
	I0930 04:02:26.698990    4637 start.go:83] releasing machines lock for "offline-docker-897000", held for 2.341838167s
	W0930 04:02:26.699352    4637 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p offline-docker-897000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0930 04:02:26.724033    4637 out.go:201] 
	W0930 04:02:26.727089    4637 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0930 04:02:26.727122    4637 out.go:270] * 
	W0930 04:02:26.728949    4637 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0930 04:02:26.744059    4637 out.go:201] 

** /stderr **
aab_offline_test.go:58: out/minikube-darwin-arm64 start -p offline-docker-897000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2  failed: exit status 80
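
Every failure of this shape in the run reduces to the same root cause: the qemu2 driver launches QEMU through socket_vmnet_client, and the socket_vmnet daemon's control socket at /var/run/socket_vmnet is refusing connections. A minimal sketch (not minikube code; the socket path is taken from the log above) that reproduces the check from the host:

	// probe_socket_vmnet.go — hedged sketch, not part of minikube: dial the
	// unix socket that socket_vmnet_client needs before it can hand a
	// networking fd to qemu-system-aarch64.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			// With the daemon down this prints "connection refused",
			// matching the STDERR captured in the log above.
			fmt.Println("socket_vmnet unreachable:", err)
			return
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

If the dial fails, the socket_vmnet daemon on the CI host is most likely not running (or listening on a different path); that is a host-setup problem, which is why the suggested "minikube delete -p offline-docker-897000" cannot fix it.
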
panic.go:629: *** TestOffline FAILED at 2024-09-30 04:02:26.76062 -0700 PDT m=+2542.473816751
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-897000 -n offline-docker-897000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-897000 -n offline-docker-897000: exit status 7 (66.826417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "offline-docker-897000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "offline-docker-897000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p offline-docker-897000
--- FAIL: TestOffline (10.01s)

TestAddons/parallel/Registry (71.3s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:328: registry stabilized in 2.069666ms
addons_test.go:330: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-j6ss4" [230e32a5-8b5f-413f-b994-093070028d06] Running
addons_test.go:330: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.009330792s
addons_test.go:333: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-tkz2n" [151b7d8c-f9bc-4089-a54a-897445c55163] Running
addons_test.go:333: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.010689292s
addons_test.go:338: (dbg) Run:  kubectl --context addons-584000 delete po -l run=registry-test --now
addons_test.go:343: (dbg) Run:  kubectl --context addons-584000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:343: (dbg) Non-zero exit: kubectl --context addons-584000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": exit status 1 (1m0.061478375s)

-- stdout --
	pod "registry-test" deleted

-- /stdout --
** stderr ** 
	error: timed out waiting for the condition

** /stderr **
addons_test.go:345: failed to hit registry.kube-system.svc.cluster.local. args "kubectl --context addons-584000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c \"wget --spider -S http://registry.kube-system.svc.cluster.local\"" failed: exit status 1
addons_test.go:349: expected curl response be "HTTP/1.1 200", but got *pod "registry-test" deleted
*
addons_test.go:357: (dbg) Run:  out/minikube-darwin-arm64 -p addons-584000 ip
2024/09/30 03:35:08 [DEBUG] GET http://192.168.105.2:5000
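
The test probes the same registry two ways: via the in-cluster service DNS name (resolvable only from inside a pod, hence the busybox wget above, which timed out) and via the node IP on port 5000, shown by the [DEBUG] GET line. A hedged Go sketch of both probes, with the URLs copied from the log:

	// registry_probe.go — illustrative only; both URLs are taken from the log.
	package main

	import (
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{Timeout: 5 * time.Second}
		for _, url := range []string{
			// Resolves only through cluster DNS, i.e. from inside a pod:
			"http://registry.kube-system.svc.cluster.local",
			// Node IP plus the registry port, reachable from the host:
			"http://192.168.105.2:5000",
		} {
			resp, err := client.Head(url)
			if err != nil {
				fmt.Println(url, "->", err)
				continue
			}
			resp.Body.Close()
			fmt.Println(url, "->", resp.Status)
		}
	}
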
addons_test.go:386: (dbg) Run:  out/minikube-darwin-arm64 -p addons-584000 addons disable registry --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p addons-584000 -n addons-584000
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p addons-584000 logs -n 25
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only                                                                     | download-only-388000 | jenkins | v1.34.0 | 30 Sep 24 03:20 PDT |                     |
	|         | -p download-only-388000                                                                     |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                                                                |                      |         |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                                              |                      |         |         |                     |                     |
	| delete  | --all                                                                                       | minikube             | jenkins | v1.34.0 | 30 Sep 24 03:20 PDT | 30 Sep 24 03:20 PDT |
	| delete  | -p download-only-388000                                                                     | download-only-388000 | jenkins | v1.34.0 | 30 Sep 24 03:20 PDT | 30 Sep 24 03:20 PDT |
	| start   | -o=json --download-only                                                                     | download-only-691000 | jenkins | v1.34.0 | 30 Sep 24 03:20 PDT |                     |
	|         | -p download-only-691000                                                                     |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                                                                |                      |         |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                                              |                      |         |         |                     |                     |
	| delete  | --all                                                                                       | minikube             | jenkins | v1.34.0 | 30 Sep 24 03:21 PDT | 30 Sep 24 03:21 PDT |
	| delete  | -p download-only-691000                                                                     | download-only-691000 | jenkins | v1.34.0 | 30 Sep 24 03:21 PDT | 30 Sep 24 03:21 PDT |
	| delete  | -p download-only-388000                                                                     | download-only-388000 | jenkins | v1.34.0 | 30 Sep 24 03:21 PDT | 30 Sep 24 03:21 PDT |
	| delete  | -p download-only-691000                                                                     | download-only-691000 | jenkins | v1.34.0 | 30 Sep 24 03:21 PDT | 30 Sep 24 03:21 PDT |
	| start   | --download-only -p                                                                          | binary-mirror-109000 | jenkins | v1.34.0 | 30 Sep 24 03:21 PDT |                     |
	|         | binary-mirror-109000                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:49316                                                                      |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                                              |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-109000                                                                     | binary-mirror-109000 | jenkins | v1.34.0 | 30 Sep 24 03:21 PDT | 30 Sep 24 03:21 PDT |
	| addons  | enable dashboard -p                                                                         | addons-584000        | jenkins | v1.34.0 | 30 Sep 24 03:21 PDT |                     |
	|         | addons-584000                                                                               |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-584000        | jenkins | v1.34.0 | 30 Sep 24 03:21 PDT |                     |
	|         | addons-584000                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-584000 --wait=true                                                                | addons-584000        | jenkins | v1.34.0 | 30 Sep 24 03:21 PDT | 30 Sep 24 03:25 PDT |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --driver=qemu2  --addons=ingress                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	| addons  | addons-584000 addons disable                                                                | addons-584000        | jenkins | v1.34.0 | 30 Sep 24 03:25 PDT | 30 Sep 24 03:25 PDT |
	|         | volcano --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-584000        | jenkins | v1.34.0 | 30 Sep 24 03:33 PDT | 30 Sep 24 03:33 PDT |
	|         | -p addons-584000                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-584000 addons disable                                                                | addons-584000        | jenkins | v1.34.0 | 30 Sep 24 03:34 PDT | 30 Sep 24 03:34 PDT |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-584000 addons disable                                                                | addons-584000        | jenkins | v1.34.0 | 30 Sep 24 03:34 PDT | 30 Sep 24 03:34 PDT |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-584000        | jenkins | v1.34.0 | 30 Sep 24 03:34 PDT | 30 Sep 24 03:34 PDT |
	|         | -p addons-584000                                                                            |                      |         |         |                     |                     |
	| ssh     | addons-584000 ssh cat                                                                       | addons-584000        | jenkins | v1.34.0 | 30 Sep 24 03:34 PDT | 30 Sep 24 03:34 PDT |
	|         | /opt/local-path-provisioner/pvc-7a8edbd9-cb85-4491-8c48-da2806ec0d22_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-584000 addons disable                                                                | addons-584000        | jenkins | v1.34.0 | 30 Sep 24 03:34 PDT | 30 Sep 24 03:34 PDT |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-584000        | jenkins | v1.34.0 | 30 Sep 24 03:34 PDT | 30 Sep 24 03:34 PDT |
	|         | addons-584000                                                                               |                      |         |         |                     |                     |
	| addons  | addons-584000 addons                                                                        | addons-584000        | jenkins | v1.34.0 | 30 Sep 24 03:34 PDT | 30 Sep 24 03:34 PDT |
	|         | disable metrics-server                                                                      |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-584000 ip                                                                            | addons-584000        | jenkins | v1.34.0 | 30 Sep 24 03:35 PDT | 30 Sep 24 03:35 PDT |
	| addons  | addons-584000 addons disable                                                                | addons-584000        | jenkins | v1.34.0 | 30 Sep 24 03:35 PDT | 30 Sep 24 03:35 PDT |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/30 03:21:02
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.23.0 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0930 03:21:02.440730    2019 out.go:345] Setting OutFile to fd 1 ...
	I0930 03:21:02.440850    2019 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 03:21:02.440853    2019 out.go:358] Setting ErrFile to fd 2...
	I0930 03:21:02.440855    2019 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 03:21:02.440999    2019 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19734-1406/.minikube/bin
	I0930 03:21:02.442076    2019 out.go:352] Setting JSON to false
	I0930 03:21:02.458485    2019 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1225,"bootTime":1727690437,"procs":458,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0930 03:21:02.458547    2019 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0930 03:21:02.463069    2019 out.go:177] * [addons-584000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0930 03:21:02.470057    2019 out.go:177]   - MINIKUBE_LOCATION=19734
	I0930 03:21:02.470113    2019 notify.go:220] Checking for updates...
	I0930 03:21:02.477114    2019 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19734-1406/kubeconfig
	I0930 03:21:02.479979    2019 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0930 03:21:02.483030    2019 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0930 03:21:02.486078    2019 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19734-1406/.minikube
	I0930 03:21:02.489049    2019 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0930 03:21:02.492262    2019 driver.go:394] Setting default libvirt URI to qemu:///system
	I0930 03:21:02.496063    2019 out.go:177] * Using the qemu2 driver based on user configuration
	I0930 03:21:02.503000    2019 start.go:297] selected driver: qemu2
	I0930 03:21:02.503006    2019 start.go:901] validating driver "qemu2" against <nil>
	I0930 03:21:02.503011    2019 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0930 03:21:02.505139    2019 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0930 03:21:02.508060    2019 out.go:177] * Automatically selected the socket_vmnet network
	I0930 03:21:02.509576    2019 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0930 03:21:02.509603    2019 cni.go:84] Creating CNI manager for ""
	I0930 03:21:02.509625    2019 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0930 03:21:02.509634    2019 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0930 03:21:02.509666    2019 start.go:340] cluster config:
	{Name:addons-584000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-584000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 03:21:02.513248    2019 iso.go:125] acquiring lock: {Name:mk0aae8f7f133a44b195b0eefe4c7ae573d14aac Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0930 03:21:02.521061    2019 out.go:177] * Starting "addons-584000" primary control-plane node in "addons-584000" cluster
	I0930 03:21:02.525058    2019 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0930 03:21:02.525075    2019 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0930 03:21:02.525083    2019 cache.go:56] Caching tarball of preloaded images
	I0930 03:21:02.525161    2019 preload.go:172] Found /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0930 03:21:02.525167    2019 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0930 03:21:02.525400    2019 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19734-1406/.minikube/profiles/addons-584000/config.json ...
	I0930 03:21:02.525411    2019 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19734-1406/.minikube/profiles/addons-584000/config.json: {Name:mk5cfd4a7aec5e2b853d8118a13500509171764f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 03:21:02.525822    2019 start.go:360] acquireMachinesLock for addons-584000: {Name:mk6ecfc7d8126da9fa2597e515ffeee9de676ee7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0930 03:21:02.525889    2019 start.go:364] duration metric: took 60.666µs to acquireMachinesLock for "addons-584000"
	I0930 03:21:02.525901    2019 start.go:93] Provisioning new machine with config: &{Name:addons-584000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-584000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0930 03:21:02.525940    2019 start.go:125] createHost starting for "" (driver="qemu2")
	I0930 03:21:02.531115    2019 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0930 03:21:02.758161    2019 start.go:159] libmachine.API.Create for "addons-584000" (driver="qemu2")
	I0930 03:21:02.758208    2019 client.go:168] LocalClient.Create starting
	I0930 03:21:02.758347    2019 main.go:141] libmachine: Creating CA: /Users/jenkins/minikube-integration/19734-1406/.minikube/certs/ca.pem
	I0930 03:21:02.839454    2019 main.go:141] libmachine: Creating client certificate: /Users/jenkins/minikube-integration/19734-1406/.minikube/certs/cert.pem
	I0930 03:21:02.996082    2019 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19734-1406/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0930 03:21:03.338138    2019 main.go:141] libmachine: Creating SSH key...
	I0930 03:21:03.399232    2019 main.go:141] libmachine: Creating Disk image...
	I0930 03:21:03.399237    2019 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0930 03:21:03.399464    2019 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/addons-584000/disk.qcow2.raw /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/addons-584000/disk.qcow2
	I0930 03:21:03.418570    2019 main.go:141] libmachine: STDOUT: 
	I0930 03:21:03.418591    2019 main.go:141] libmachine: STDERR: 
	I0930 03:21:03.418656    2019 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/addons-584000/disk.qcow2 +20000M
	I0930 03:21:03.426731    2019 main.go:141] libmachine: STDOUT: Image resized.
	
	I0930 03:21:03.426747    2019 main.go:141] libmachine: STDERR: 
	I0930 03:21:03.426760    2019 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/addons-584000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/addons-584000/disk.qcow2
	I0930 03:21:03.426765    2019 main.go:141] libmachine: Starting QEMU VM...
	I0930 03:21:03.426802    2019 qemu.go:418] Using hvf for hardware acceleration
	I0930 03:21:03.426828    2019 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/addons-584000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19734-1406/.minikube/machines/addons-584000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/addons-584000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8a:5a:2d:93:ca:0d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/addons-584000/disk.qcow2
	I0930 03:21:03.485626    2019 main.go:141] libmachine: STDOUT: 
	I0930 03:21:03.485655    2019 main.go:141] libmachine: STDERR: 
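
The `-netdev socket,id=net0,fd=3` in the command above works because socket_vmnet_client dials /var/run/socket_vmnet first and then starts QEMU with that connection already open as file descriptor 3. A hedged sketch of that fd-passing pattern in Go (in exec.Cmd, ExtraFiles[0] becomes fd 3 in the child, after stdin/stdout/stderr):

	// fd3_sketch.go — illustrative of the fd=3 plumbing, not minikube code.
	package main

	import (
		"fmt"
		"net"
		"os"
		"os/exec"
	)

	func main() {
		conn, err := net.Dial("unix", "/var/run/socket_vmnet")
		if err != nil {
			fmt.Println("dial:", err)
			return
		}
		f, err := conn.(*net.UnixConn).File() // dup the socket as an *os.File
		if err != nil {
			fmt.Println("file:", err)
			return
		}
		cmd := exec.Command("qemu-system-aarch64",
			"-netdev", "socket,id=net0,fd=3" /* ...rest of the args above... */)
		cmd.ExtraFiles = []*os.File{f} // the child process inherits this as fd 3
		fmt.Println("would exec:", cmd.Args)
	}
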
	I0930 03:21:03.485659    2019 main.go:141] libmachine: Attempt 0
	I0930 03:21:03.485673    2019 main.go:141] libmachine: Searching for 8a:5a:2d:93:ca:d in /var/db/dhcpd_leases ...
	I0930 03:21:03.485730    2019 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0930 03:21:03.485748    2019 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x66fbccbd}
	I0930 03:21:05.487854    2019 main.go:141] libmachine: Attempt 1
	I0930 03:21:05.488009    2019 main.go:141] libmachine: Searching for 8a:5a:2d:93:ca:d in /var/db/dhcpd_leases ...
	I0930 03:21:05.488340    2019 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0930 03:21:05.488391    2019 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x66fbccbd}
	I0930 03:21:07.490589    2019 main.go:141] libmachine: Attempt 2
	I0930 03:21:07.490699    2019 main.go:141] libmachine: Searching for 8a:5a:2d:93:ca:d in /var/db/dhcpd_leases ...
	I0930 03:21:07.491084    2019 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0930 03:21:07.491144    2019 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x66fbccbd}
	I0930 03:21:09.493273    2019 main.go:141] libmachine: Attempt 3
	I0930 03:21:09.493304    2019 main.go:141] libmachine: Searching for 8a:5a:2d:93:ca:d in /var/db/dhcpd_leases ...
	I0930 03:21:09.493369    2019 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0930 03:21:09.493383    2019 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x66fbccbd}
	I0930 03:21:11.495405    2019 main.go:141] libmachine: Attempt 4
	I0930 03:21:11.495432    2019 main.go:141] libmachine: Searching for 8a:5a:2d:93:ca:d in /var/db/dhcpd_leases ...
	I0930 03:21:11.495484    2019 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0930 03:21:11.495494    2019 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x66fbccbd}
	I0930 03:21:13.497490    2019 main.go:141] libmachine: Attempt 5
	I0930 03:21:13.497499    2019 main.go:141] libmachine: Searching for 8a:5a:2d:93:ca:d in /var/db/dhcpd_leases ...
	I0930 03:21:13.497536    2019 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0930 03:21:13.497543    2019 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x66fbccbd}
	I0930 03:21:15.499538    2019 main.go:141] libmachine: Attempt 6
	I0930 03:21:15.499556    2019 main.go:141] libmachine: Searching for 8a:5a:2d:93:ca:d in /var/db/dhcpd_leases ...
	I0930 03:21:15.499628    2019 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0930 03:21:15.499637    2019 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x66fbccbd}
	I0930 03:21:17.500407    2019 main.go:141] libmachine: Attempt 7
	I0930 03:21:17.500489    2019 main.go:141] libmachine: Searching for 8a:5a:2d:93:ca:d in /var/db/dhcpd_leases ...
	I0930 03:21:17.500933    2019 main.go:141] libmachine: Found 2 entries in /var/db/dhcpd_leases!
	I0930 03:21:17.500985    2019 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:8a:5a:2d:93:ca:d ID:1,8a:5a:2d:93:ca:d Lease:0x66fbcd1b}
	I0930 03:21:17.501001    2019 main.go:141] libmachine: Found match: 8a:5a:2d:93:ca:d
	I0930 03:21:17.501039    2019 main.go:141] libmachine: IP: 192.168.105.2
	I0930 03:21:17.501061    2019 main.go:141] libmachine: Waiting for VM to start (ssh -p 22 docker@192.168.105.2)...
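
The Attempt 0..7 loop above is libmachine polling macOS's DHCP lease database until the new VM's MAC appears; note that the file stores each octet without leading zeros, which is why the MAC 8a:5a:2d:93:ca:0d from the QEMU command is searched as 8a:5a:2d:93:ca:d. A simplified sketch of that lookup (field names follow the /var/db/dhcpd_leases entries visible above):

	// lease_lookup.go — simplified sketch of the dhcpd_leases scan above.
	package main

	import (
		"bufio"
		"fmt"
		"os"
		"strings"
	)

	func main() {
		mac := "8a:5a:2d:93:ca:d" // leading zeros per octet are stripped in the file
		f, err := os.Open("/var/db/dhcpd_leases")
		if err != nil {
			fmt.Println(err)
			return
		}
		defer f.Close()

		var ip string
		sc := bufio.NewScanner(f)
		for sc.Scan() {
			line := strings.TrimSpace(sc.Text())
			if strings.HasPrefix(line, "ip_address=") {
				ip = strings.TrimPrefix(line, "ip_address=")
			}
			if strings.HasPrefix(line, "hw_address=") && strings.HasSuffix(line, ","+mac) {
				fmt.Println("found lease:", ip)
				return
			}
		}
		fmt.Println("no lease yet for", mac)
	}
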
	I0930 03:21:20.511740    2019 machine.go:93] provisionDockerMachine start ...
	I0930 03:21:20.512785    2019 main.go:141] libmachine: Using SSH client type: native
	I0930 03:21:20.513548    2019 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x103249c00] 0x10324c440 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0930 03:21:20.513561    2019 main.go:141] libmachine: About to run SSH command:
	hostname
	I0930 03:21:20.563201    2019 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0930 03:21:20.563211    2019 buildroot.go:166] provisioning hostname "addons-584000"
	I0930 03:21:20.563278    2019 main.go:141] libmachine: Using SSH client type: native
	I0930 03:21:20.563394    2019 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x103249c00] 0x10324c440 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0930 03:21:20.563400    2019 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-584000 && echo "addons-584000" | sudo tee /etc/hostname
	I0930 03:21:20.616605    2019 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-584000
	
	I0930 03:21:20.616659    2019 main.go:141] libmachine: Using SSH client type: native
	I0930 03:21:20.616767    2019 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x103249c00] 0x10324c440 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0930 03:21:20.616775    2019 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-584000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-584000/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-584000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0930 03:21:20.662275    2019 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0930 03:21:20.662286    2019 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19734-1406/.minikube CaCertPath:/Users/jenkins/minikube-integration/19734-1406/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19734-1406/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19734-1406/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19734-1406/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19734-1406/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19734-1406/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19734-1406/.minikube}
	I0930 03:21:20.662299    2019 buildroot.go:174] setting up certificates
	I0930 03:21:20.662303    2019 provision.go:84] configureAuth start
	I0930 03:21:20.662308    2019 provision.go:143] copyHostCerts
	I0930 03:21:20.662381    2019 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19734-1406/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19734-1406/.minikube/ca.pem (1078 bytes)
	I0930 03:21:20.662625    2019 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19734-1406/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19734-1406/.minikube/cert.pem (1123 bytes)
	I0930 03:21:20.662725    2019 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19734-1406/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19734-1406/.minikube/key.pem (1675 bytes)
	I0930 03:21:20.662806    2019 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19734-1406/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19734-1406/.minikube/certs/ca-key.pem org=jenkins.addons-584000 san=[127.0.0.1 192.168.105.2 addons-584000 localhost minikube]
	I0930 03:21:20.716628    2019 provision.go:177] copyRemoteCerts
	I0930 03:21:20.716679    2019 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0930 03:21:20.716686    2019 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19734-1406/.minikube/machines/addons-584000/id_rsa Username:docker}
	I0930 03:21:20.741143    2019 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19734-1406/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0930 03:21:20.749758    2019 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0930 03:21:20.758054    2019 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0930 03:21:20.766382    2019 provision.go:87] duration metric: took 104.064542ms to configureAuth
	I0930 03:21:20.766392    2019 buildroot.go:189] setting minikube options for container-runtime
	I0930 03:21:20.766509    2019 config.go:182] Loaded profile config "addons-584000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0930 03:21:20.766547    2019 main.go:141] libmachine: Using SSH client type: native
	I0930 03:21:20.766631    2019 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x103249c00] 0x10324c440 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0930 03:21:20.766636    2019 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0930 03:21:20.810489    2019 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0930 03:21:20.810499    2019 buildroot.go:70] root file system type: tmpfs
	I0930 03:21:20.810546    2019 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0930 03:21:20.810597    2019 main.go:141] libmachine: Using SSH client type: native
	I0930 03:21:20.810701    2019 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x103249c00] 0x10324c440 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0930 03:21:20.810736    2019 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0930 03:21:20.858418    2019 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0930 03:21:20.858474    2019 main.go:141] libmachine: Using SSH client type: native
	I0930 03:21:20.858593    2019 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x103249c00] 0x10324c440 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0930 03:21:20.858603    2019 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0930 03:21:22.229746    2019 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0930 03:21:22.229763    2019 machine.go:96] duration metric: took 1.718062417s to provisionDockerMachine
	I0930 03:21:22.229770    2019 client.go:171] duration metric: took 19.472157s to LocalClient.Create
	I0930 03:21:22.229781    2019 start.go:167] duration metric: took 19.472226292s to libmachine.API.Create "addons-584000"
	I0930 03:21:22.229789    2019 start.go:293] postStartSetup for "addons-584000" (driver="qemu2")
	I0930 03:21:22.229795    2019 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0930 03:21:22.229876    2019 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0930 03:21:22.229889    2019 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19734-1406/.minikube/machines/addons-584000/id_rsa Username:docker}
	I0930 03:21:22.254169    2019 ssh_runner.go:195] Run: cat /etc/os-release
	I0930 03:21:22.255843    2019 info.go:137] Remote host: Buildroot 2023.02.9
	I0930 03:21:22.255854    2019 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19734-1406/.minikube/addons for local assets ...
	I0930 03:21:22.255969    2019 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19734-1406/.minikube/files for local assets ...
	I0930 03:21:22.255999    2019 start.go:296] duration metric: took 26.208292ms for postStartSetup
	I0930 03:21:22.256426    2019 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19734-1406/.minikube/profiles/addons-584000/config.json ...
	I0930 03:21:22.256632    2019 start.go:128] duration metric: took 19.731294792s to createHost
	I0930 03:21:22.256663    2019 main.go:141] libmachine: Using SSH client type: native
	I0930 03:21:22.256753    2019 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x103249c00] 0x10324c440 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0930 03:21:22.256758    2019 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0930 03:21:22.303099    2019 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727691681.882020628
	
	I0930 03:21:22.303108    2019 fix.go:216] guest clock: 1727691681.882020628
	I0930 03:21:22.303113    2019 fix.go:229] Guest: 2024-09-30 03:21:21.882020628 -0700 PDT Remote: 2024-09-30 03:21:22.256635 -0700 PDT m=+19.835681251 (delta=-374.614372ms)
	I0930 03:21:22.303127    2019 fix.go:200] guest clock delta is within tolerance: -374.614372ms
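
The fix.go lines above compare the guest's `date +%s.%N` output against the host clock and accept the roughly -375ms delta as within tolerance. A hedged sketch of that comparison (both timestamps are copied from the log; the 1s threshold is an assumption for illustration, and the float parse loses sub-microsecond precision, which does not matter for a tolerance check):

	// clock_delta.go — illustrative sketch of the guest-clock tolerance check.
	package main

	import (
		"fmt"
		"math"
		"strconv"
		"time"
	)

	func main() {
		// Guest `date +%s.%N` output, as captured in the log:
		sec, _ := strconv.ParseFloat("1727691681.882020628", 64)
		guest := time.Unix(0, int64(sec*float64(time.Second)))

		// Host wall clock at the moment of the check ("Remote" in the log):
		host := time.Date(2024, 9, 30, 3, 21, 22, 256635000, time.FixedZone("PDT", -7*3600))

		delta := guest.Sub(host) // ≈ -374.6ms: the guest runs slightly behind
		if math.Abs(delta.Seconds()) <= 1.0 { // assumed tolerance
			fmt.Println("guest clock delta within tolerance:", delta)
		} else {
			fmt.Println("guest clock drift too large:", delta)
		}
	}
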
	I0930 03:21:22.303130    2019 start.go:83] releasing machines lock for "addons-584000", held for 19.777844167s
	I0930 03:21:22.303415    2019 ssh_runner.go:195] Run: cat /version.json
	I0930 03:21:22.303424    2019 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19734-1406/.minikube/machines/addons-584000/id_rsa Username:docker}
	I0930 03:21:22.303447    2019 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0930 03:21:22.303478    2019 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19734-1406/.minikube/machines/addons-584000/id_rsa Username:docker}
	I0930 03:21:22.326762    2019 ssh_runner.go:195] Run: systemctl --version
	I0930 03:21:22.417618    2019 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0930 03:21:22.420255    2019 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0930 03:21:22.420303    2019 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0930 03:21:22.428700    2019 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0930 03:21:22.428709    2019 start.go:495] detecting cgroup driver to use...
	I0930 03:21:22.428873    2019 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0930 03:21:22.437350    2019 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0930 03:21:22.441847    2019 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0930 03:21:22.445994    2019 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0930 03:21:22.446023    2019 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0930 03:21:22.450024    2019 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0930 03:21:22.453906    2019 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0930 03:21:22.457588    2019 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0930 03:21:22.461442    2019 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0930 03:21:22.465299    2019 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0930 03:21:22.469458    2019 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0930 03:21:22.473330    2019 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
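Taken together, the sed edits above leave /etc/containerd/config.toml configured for the cgroupfs driver. Their net effect is roughly this fragment (illustrative, pieced together from the commands in the log rather than from the file itself):

    [plugins."io.containerd.grpc.v1.cri"]
      enable_unprivileged_ports = true
      sandbox_image = "registry.k8s.io/pause:3.10"
      restrict_oom_score_adj = false
      [plugins."io.containerd.grpc.v1.cri".cni]
        conf_dir = "/etc/cni/net.d"
      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
        SystemdCgroup = false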
	I0930 03:21:22.477158    2019 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0930 03:21:22.480683    2019 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0930 03:21:22.480718    2019 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0930 03:21:22.484925    2019 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
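Both steps above are standard prerequisites for bridged pod networking: the sysctl key only exists once br_netfilter is loaded, which is why the first probe fails with status 255. The equivalent sequence:

    # Load br_netfilter so bridged traffic is visible to iptables, then make
    # sure the kernel forwards IPv4 packets between interfaces.
    sudo modprobe br_netfilter
    sudo sysctl net.bridge.bridge-nf-call-iptables   # now resolvable
    sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"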
	I0930 03:21:22.488591    2019 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 03:21:22.576726    2019 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0930 03:21:22.583893    2019 start.go:495] detecting cgroup driver to use...
	I0930 03:21:22.583976    2019 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0930 03:21:22.591720    2019 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0930 03:21:22.597580    2019 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0930 03:21:22.604201    2019 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0930 03:21:22.609593    2019 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0930 03:21:22.615003    2019 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0930 03:21:22.657450    2019 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0930 03:21:22.664007    2019 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0930 03:21:22.670779    2019 ssh_runner.go:195] Run: which cri-dockerd
	I0930 03:21:22.672132    2019 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0930 03:21:22.675303    2019 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0930 03:21:22.681071    2019 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0930 03:21:22.766264    2019 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0930 03:21:22.841434    2019 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0930 03:21:22.841490    2019 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
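The log doesn't print the 130-byte daemon.json it writes, but the stated goal is the cgroupfs cgroup driver, so the essential content is along these lines (illustrative; the exact file contents are not in the log):

    {
      "exec-opts": ["native.cgroupdriver=cgroupfs"]
    }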
	I0930 03:21:22.847725    2019 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 03:21:22.927544    2019 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0930 03:21:25.115360    2019 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.187866083s)
	I0930 03:21:25.115441    2019 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0930 03:21:25.120884    2019 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0930 03:21:25.128142    2019 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0930 03:21:25.133431    2019 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0930 03:21:25.210709    2019 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0930 03:21:25.294293    2019 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 03:21:25.378512    2019 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0930 03:21:25.385700    2019 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0930 03:21:25.391293    2019 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 03:21:25.470760    2019 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0930 03:21:25.496012    2019 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0930 03:21:25.496130    2019 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
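The unmask/enable/daemon-reload/restart dance above condenses to the following, after which minikube polls for the CRI socket (a sketch, assuming a systemd guest):

    sudo systemctl unmask docker.service cri-docker.socket
    sudo systemctl enable docker.socket cri-docker.socket
    sudo systemctl daemon-reload
    sudo systemctl restart docker cri-docker.socket cri-docker.service
    stat /var/run/cri-dockerd.sock   # the socket path minikube waits up to 60s for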
	I0930 03:21:25.499594    2019 start.go:563] Will wait 60s for crictl version
	I0930 03:21:25.499646    2019 ssh_runner.go:195] Run: which crictl
	I0930 03:21:25.501043    2019 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0930 03:21:25.518851    2019 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.3.1
	RuntimeApiVersion:  v1
	I0930 03:21:25.518941    2019 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0930 03:21:25.530547    2019 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0930 03:21:25.548653    2019 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.3.1 ...
	I0930 03:21:25.548819    2019 ssh_runner.go:195] Run: grep 192.168.105.1	host.minikube.internal$ /etc/hosts
	I0930 03:21:25.550230    2019 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.105.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
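The one-liner above is minikube's idempotent /etc/hosts update: filter out any stale entry for the name, append the fresh mapping, and copy the temp file back over /etc/hosts. Spelled out:

    {
      grep -v $'\thost.minikube.internal$' /etc/hosts
      printf '192.168.105.1\thost.minikube.internal\n'
    } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts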
	I0930 03:21:25.554394    2019 kubeadm.go:883] updating cluster {Name:addons-584000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-584000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0930 03:21:25.554439    2019 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0930 03:21:25.554494    2019 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0930 03:21:25.559879    2019 docker.go:685] Got preloaded images: 
	I0930 03:21:25.559887    2019 docker.go:691] registry.k8s.io/kube-apiserver:v1.31.1 wasn't preloaded
	I0930 03:21:25.559931    2019 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0930 03:21:25.563457    2019 ssh_runner.go:195] Run: which lz4
	I0930 03:21:25.564961    2019 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0930 03:21:25.566530    2019 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0930 03:21:25.566544    2019 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (322160019 bytes)
	I0930 03:21:26.846079    2019 docker.go:649] duration metric: took 1.281199334s to copy over tarball
	I0930 03:21:26.846150    2019 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0930 03:21:27.790380    2019 ssh_runner.go:146] rm: /preloaded.tar.lz4
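The preload shortcut above skips pulling images one by one: a ~322 MB lz4 tarball of a pre-populated /var/lib/docker is copied into the guest and unpacked in place. The extraction flags matter:

    # -I lz4: decompress through lz4; -C /var: unpack over /var/lib/docker;
    # --xattrs-include security.capability: keep file capabilities intact.
    sudo tar --xattrs --xattrs-include security.capability -I lz4 \
      -C /var -xf /preloaded.tar.lz4
    rm /preloaded.tar.lz4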
	I0930 03:21:27.805253    2019 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0930 03:21:27.809104    2019 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2631 bytes)
	I0930 03:21:27.814779    2019 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 03:21:27.886948    2019 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0930 03:21:30.090956    2019 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.204059334s)
	I0930 03:21:30.091077    2019 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0930 03:21:30.100600    2019 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0930 03:21:30.100614    2019 cache_images.go:84] Images are preloaded, skipping loading
	I0930 03:21:30.100618    2019 kubeadm.go:934] updating node { 192.168.105.2 8443 v1.31.1 docker true true} ...
	I0930 03:21:30.100676    2019 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-584000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.105.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-584000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
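The drop-in above clears ExecStart and relaunches kubelet with node-specific flags; it lands on disk a few lines below as /etc/systemd/system/kubelet.service.d/10-kubeadm.conf, and the merged unit can be inspected with:

    systemctl cat kubelet   # shows kubelet.service plus the 10-kubeadm.conf drop-in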
	I0930 03:21:30.100753    2019 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0930 03:21:30.122272    2019 cni.go:84] Creating CNI manager for ""
	I0930 03:21:30.122289    2019 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0930 03:21:30.122295    2019 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0930 03:21:30.122305    2019 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.105.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-584000 NodeName:addons-584000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.105.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.105.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0930 03:21:30.122370    2019 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.105.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-584000"
	  kubeletExtraArgs:
	    node-ip: 192.168.105.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.105.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
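Before the real init runs, a config like the one generated above can be exercised without mutating the node. This is not part of this run, just a way to validate it:

    sudo /var/lib/minikube/binaries/v1.31.1/kubeadm init \
      --config /var/tmp/minikube/kubeadm.yaml --dry-run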
	I0930 03:21:30.122438    2019 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0930 03:21:30.126133    2019 binaries.go:44] Found k8s binaries, skipping transfer
	I0930 03:21:30.126173    2019 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0930 03:21:30.129632    2019 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0930 03:21:30.135577    2019 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0930 03:21:30.141363    2019 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
	I0930 03:21:30.147324    2019 ssh_runner.go:195] Run: grep 192.168.105.2	control-plane.minikube.internal$ /etc/hosts
	I0930 03:21:30.148574    2019 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.105.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0930 03:21:30.152817    2019 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 03:21:30.237930    2019 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0930 03:21:30.244528    2019 certs.go:68] Setting up /Users/jenkins/minikube-integration/19734-1406/.minikube/profiles/addons-584000 for IP: 192.168.105.2
	I0930 03:21:30.244551    2019 certs.go:194] generating shared ca certs ...
	I0930 03:21:30.244562    2019 certs.go:226] acquiring lock for ca certs: {Name:mkeec9701f93539137211ace80b844b19e48dcd1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 03:21:30.244755    2019 certs.go:240] generating "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19734-1406/.minikube/ca.key
	I0930 03:21:30.358289    2019 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19734-1406/.minikube/ca.crt ...
	I0930 03:21:30.358297    2019 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19734-1406/.minikube/ca.crt: {Name:mkabf69fc987d23492963bea77413679244650dc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 03:21:30.358572    2019 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19734-1406/.minikube/ca.key ...
	I0930 03:21:30.358575    2019 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19734-1406/.minikube/ca.key: {Name:mk5874a27c9334655aa717b755bc4cffc8a9e8a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 03:21:30.358702    2019 certs.go:240] generating "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19734-1406/.minikube/proxy-client-ca.key
	I0930 03:21:30.506356    2019 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19734-1406/.minikube/proxy-client-ca.crt ...
	I0930 03:21:30.506367    2019 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19734-1406/.minikube/proxy-client-ca.crt: {Name:mk9fea6f9e3ab73d9f68a02a09ff642af56fd5be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 03:21:30.506625    2019 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19734-1406/.minikube/proxy-client-ca.key ...
	I0930 03:21:30.506628    2019 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19734-1406/.minikube/proxy-client-ca.key: {Name:mk4f7da49d670bfd13d2ce3b0970af64d509a8a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 03:21:30.506757    2019 certs.go:256] generating profile certs ...
	I0930 03:21:30.506791    2019 certs.go:363] generating signed profile cert for "minikube-user": /Users/jenkins/minikube-integration/19734-1406/.minikube/profiles/addons-584000/client.key
	I0930 03:21:30.506801    2019 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19734-1406/.minikube/profiles/addons-584000/client.crt with IP's: []
	I0930 03:21:30.547293    2019 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19734-1406/.minikube/profiles/addons-584000/client.crt ...
	I0930 03:21:30.547297    2019 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19734-1406/.minikube/profiles/addons-584000/client.crt: {Name:mkaf2983b31a0b33b4d23b307668224ae35962f1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 03:21:30.547439    2019 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19734-1406/.minikube/profiles/addons-584000/client.key ...
	I0930 03:21:30.547442    2019 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19734-1406/.minikube/profiles/addons-584000/client.key: {Name:mk75b977f252c234f31722f9f626d7999c0bb264 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 03:21:30.547566    2019 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19734-1406/.minikube/profiles/addons-584000/apiserver.key.e19569d5
	I0930 03:21:30.547575    2019 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19734-1406/.minikube/profiles/addons-584000/apiserver.crt.e19569d5 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.105.2]
	I0930 03:21:30.602012    2019 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19734-1406/.minikube/profiles/addons-584000/apiserver.crt.e19569d5 ...
	I0930 03:21:30.602015    2019 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19734-1406/.minikube/profiles/addons-584000/apiserver.crt.e19569d5: {Name:mka90dbbf8199cf550972076a1625c172b06f1c5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 03:21:30.602157    2019 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19734-1406/.minikube/profiles/addons-584000/apiserver.key.e19569d5 ...
	I0930 03:21:30.602160    2019 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19734-1406/.minikube/profiles/addons-584000/apiserver.key.e19569d5: {Name:mk7ffbfcea18c2f5796f5142b362810d518744fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 03:21:30.602275    2019 certs.go:381] copying /Users/jenkins/minikube-integration/19734-1406/.minikube/profiles/addons-584000/apiserver.crt.e19569d5 -> /Users/jenkins/minikube-integration/19734-1406/.minikube/profiles/addons-584000/apiserver.crt
	I0930 03:21:30.602391    2019 certs.go:385] copying /Users/jenkins/minikube-integration/19734-1406/.minikube/profiles/addons-584000/apiserver.key.e19569d5 -> /Users/jenkins/minikube-integration/19734-1406/.minikube/profiles/addons-584000/apiserver.key
	I0930 03:21:30.602481    2019 certs.go:363] generating signed profile cert for "aggregator": /Users/jenkins/minikube-integration/19734-1406/.minikube/profiles/addons-584000/proxy-client.key
	I0930 03:21:30.602489    2019 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19734-1406/.minikube/profiles/addons-584000/proxy-client.crt with IP's: []
	I0930 03:21:30.738722    2019 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19734-1406/.minikube/profiles/addons-584000/proxy-client.crt ...
	I0930 03:21:30.738726    2019 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19734-1406/.minikube/profiles/addons-584000/proxy-client.crt: {Name:mkdf7892b1a7b8deb650d28448de804d10cae38e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 03:21:30.738873    2019 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19734-1406/.minikube/profiles/addons-584000/proxy-client.key ...
	I0930 03:21:30.738876    2019 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19734-1406/.minikube/profiles/addons-584000/proxy-client.key: {Name:mke5603a185d280b9ff1d5630bb5350152481332 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 03:21:30.739141    2019 certs.go:484] found cert: /Users/jenkins/minikube-integration/19734-1406/.minikube/certs/ca-key.pem (1675 bytes)
	I0930 03:21:30.739165    2019 certs.go:484] found cert: /Users/jenkins/minikube-integration/19734-1406/.minikube/certs/ca.pem (1078 bytes)
	I0930 03:21:30.739186    2019 certs.go:484] found cert: /Users/jenkins/minikube-integration/19734-1406/.minikube/certs/cert.pem (1123 bytes)
	I0930 03:21:30.739206    2019 certs.go:484] found cert: /Users/jenkins/minikube-integration/19734-1406/.minikube/certs/key.pem (1675 bytes)
	I0930 03:21:30.739686    2019 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19734-1406/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0930 03:21:30.749281    2019 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19734-1406/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0930 03:21:30.757653    2019 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19734-1406/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0930 03:21:30.765879    2019 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19734-1406/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0930 03:21:30.774109    2019 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19734-1406/.minikube/profiles/addons-584000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0930 03:21:30.782012    2019 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19734-1406/.minikube/profiles/addons-584000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0930 03:21:30.789964    2019 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19734-1406/.minikube/profiles/addons-584000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0930 03:21:30.797937    2019 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19734-1406/.minikube/profiles/addons-584000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0930 03:21:30.806055    2019 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19734-1406/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0930 03:21:30.814130    2019 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
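A quick way to confirm the SAN list baked into the apiserver cert generated and copied above (10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.105.2) is to dump it with openssl on the host:

    openssl x509 -noout -text \
      -in /Users/jenkins/minikube-integration/19734-1406/.minikube/profiles/addons-584000/apiserver.crt \
      | grep -A1 'Subject Alternative Name'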
	I0930 03:21:30.820742    2019 ssh_runner.go:195] Run: openssl version
	I0930 03:21:30.822851    2019 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0930 03:21:30.826633    2019 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0930 03:21:30.828189    2019 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 30 10:21 /usr/share/ca-certificates/minikubeCA.pem
	I0930 03:21:30.828219    2019 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0930 03:21:30.830315    2019 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
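The b5213941.0 symlink name above is not arbitrary: OpenSSL resolves trust by subject-hash filenames, so the link must be named <hash>.0. Reproducing the hash:

    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"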
	I0930 03:21:30.834164    2019 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0930 03:21:30.835576    2019 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0930 03:21:30.835616    2019 kubeadm.go:392] StartCluster: {Name:addons-584000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-584000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 03:21:30.835697    2019 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0930 03:21:30.841272    2019 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0930 03:21:30.845435    2019 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0930 03:21:30.849145    2019 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0930 03:21:30.852791    2019 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0930 03:21:30.852797    2019 kubeadm.go:157] found existing configuration files:
	
	I0930 03:21:30.852824    2019 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0930 03:21:30.856510    2019 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0930 03:21:30.856539    2019 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0930 03:21:30.859936    2019 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0930 03:21:30.863237    2019 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0930 03:21:30.863265    2019 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0930 03:21:30.866563    2019 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0930 03:21:30.869791    2019 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0930 03:21:30.869820    2019 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0930 03:21:30.873525    2019 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0930 03:21:30.877176    2019 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0930 03:21:30.877205    2019 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0930 03:21:30.881659    2019 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0930 03:21:30.912787    2019 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0930 03:21:30.912817    2019 kubeadm.go:310] [preflight] Running pre-flight checks
	I0930 03:21:30.949820    2019 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0930 03:21:30.949877    2019 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0930 03:21:30.949921    2019 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0930 03:21:30.954034    2019 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0930 03:21:30.967232    2019 out.go:235]   - Generating certificates and keys ...
	I0930 03:21:30.967264    2019 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0930 03:21:30.967296    2019 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0930 03:21:30.984515    2019 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0930 03:21:31.145945    2019 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0930 03:21:31.312389    2019 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0930 03:21:31.379354    2019 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0930 03:21:31.441610    2019 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0930 03:21:31.441665    2019 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-584000 localhost] and IPs [192.168.105.2 127.0.0.1 ::1]
	I0930 03:21:31.540661    2019 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0930 03:21:31.540737    2019 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-584000 localhost] and IPs [192.168.105.2 127.0.0.1 ::1]
	I0930 03:21:31.589080    2019 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0930 03:21:31.767573    2019 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0930 03:21:31.851836    2019 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0930 03:21:31.851874    2019 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0930 03:21:31.980863    2019 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0930 03:21:32.116201    2019 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0930 03:21:32.202451    2019 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0930 03:21:32.258434    2019 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0930 03:21:32.324205    2019 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0930 03:21:32.324540    2019 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0930 03:21:32.325822    2019 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0930 03:21:32.330961    2019 out.go:235]   - Booting up control plane ...
	I0930 03:21:32.331007    2019 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0930 03:21:32.331048    2019 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0930 03:21:32.331080    2019 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0930 03:21:32.333348    2019 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0930 03:21:32.336385    2019 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0930 03:21:32.336421    2019 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0930 03:21:32.418500    2019 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0930 03:21:32.418586    2019 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0930 03:21:32.930746    2019 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 511.271959ms
	I0930 03:21:32.931017    2019 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0930 03:21:36.431519    2019 kubeadm.go:310] [api-check] The API server is healthy after 3.500688044s
	I0930 03:21:36.437096    2019 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0930 03:21:36.441561    2019 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0930 03:21:36.449489    2019 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0930 03:21:36.449593    2019 kubeadm.go:310] [mark-control-plane] Marking the node addons-584000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0930 03:21:36.453015    2019 kubeadm.go:310] [bootstrap-token] Using token: jhaagr.0kavycp1p5lrlfzd
	I0930 03:21:36.459232    2019 out.go:235]   - Configuring RBAC rules ...
	I0930 03:21:36.459291    2019 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0930 03:21:36.460289    2019 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0930 03:21:36.468313    2019 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0930 03:21:36.469313    2019 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0930 03:21:36.470417    2019 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0930 03:21:36.471388    2019 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0930 03:21:36.839054    2019 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0930 03:21:37.240063    2019 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0930 03:21:37.841594    2019 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0930 03:21:37.843515    2019 kubeadm.go:310] 
	I0930 03:21:37.843610    2019 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0930 03:21:37.843636    2019 kubeadm.go:310] 
	I0930 03:21:37.843787    2019 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0930 03:21:37.843799    2019 kubeadm.go:310] 
	I0930 03:21:37.843842    2019 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0930 03:21:37.843943    2019 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0930 03:21:37.844028    2019 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0930 03:21:37.844040    2019 kubeadm.go:310] 
	I0930 03:21:37.844141    2019 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0930 03:21:37.844152    2019 kubeadm.go:310] 
	I0930 03:21:37.844223    2019 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0930 03:21:37.844236    2019 kubeadm.go:310] 
	I0930 03:21:37.844322    2019 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0930 03:21:37.844497    2019 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0930 03:21:37.844612    2019 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0930 03:21:37.844624    2019 kubeadm.go:310] 
	I0930 03:21:37.844777    2019 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0930 03:21:37.844912    2019 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0930 03:21:37.844921    2019 kubeadm.go:310] 
	I0930 03:21:37.845067    2019 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token jhaagr.0kavycp1p5lrlfzd \
	I0930 03:21:37.845258    2019 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:72c345a63d908b27c1ed290ebc60ebd5e5e1c4e3ebfaa90fcb5390bc8578ae1d \
	I0930 03:21:37.845299    2019 kubeadm.go:310] 	--control-plane 
	I0930 03:21:37.845311    2019 kubeadm.go:310] 
	I0930 03:21:37.845441    2019 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0930 03:21:37.845455    2019 kubeadm.go:310] 
	I0930 03:21:37.845575    2019 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token jhaagr.0kavycp1p5lrlfzd \
	I0930 03:21:37.845746    2019 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:72c345a63d908b27c1ed290ebc60ebd5e5e1c4e3ebfaa90fcb5390bc8578ae1d 
	I0930 03:21:37.846616    2019 kubeadm.go:310] W0930 10:21:30.487960    1602 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0930 03:21:37.847109    2019 kubeadm.go:310] W0930 10:21:30.491244    1602 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0930 03:21:37.847335    2019 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0930 03:21:37.847375    2019 cni.go:84] Creating CNI manager for ""
	I0930 03:21:37.847411    2019 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0930 03:21:37.854353    2019 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0930 03:21:37.858641    2019 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0930 03:21:37.872271    2019 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
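The 496-byte conflist above is minikube's bridge CNI config. The log doesn't show its contents, but given the pod CIDR from the kubeadm options it has roughly this shape (illustrative only):

    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        { "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" } },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }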
	I0930 03:21:37.891649    2019 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0930 03:21:37.891786    2019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 03:21:37.891874    2019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-584000 minikube.k8s.io/updated_at=2024_09_30T03_21_37_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=b68b4b088317c82ffa16da1c47933e77f0f5d128 minikube.k8s.io/name=addons-584000 minikube.k8s.io/primary=true
	I0930 03:21:37.911731    2019 ops.go:34] apiserver oom_adj: -16
	I0930 03:21:37.957256    2019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 03:21:38.459362    2019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 03:21:38.958195    2019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 03:21:39.459370    2019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 03:21:39.959401    2019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 03:21:40.459299    2019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 03:21:40.957983    2019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 03:21:41.459189    2019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 03:21:41.959288    2019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 03:21:42.459269    2019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 03:21:42.959246    2019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 03:21:42.995958    2019 kubeadm.go:1113] duration metric: took 5.104461625s to wait for elevateKubeSystemPrivileges
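The half-second cadence above is a plain retry loop: minikube re-runs "kubectl get sa default" until the default ServiceAccount exists, which signals that the controller-manager's service-account machinery is live. Equivalent shell:

    # Poll every 500ms until the "default" ServiceAccount shows up.
    until sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default \
        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done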
	I0930 03:21:42.995976    2019 kubeadm.go:394] duration metric: took 12.160736041s to StartCluster
	I0930 03:21:42.995986    2019 settings.go:142] acquiring lock: {Name:mk8d331f80592adde11c8565cba0670e3b2db485 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 03:21:42.996173    2019 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19734-1406/kubeconfig
	I0930 03:21:42.996389    2019 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19734-1406/kubeconfig: {Name:mkab83a5d15ec3b983b07760462d9a2ee8e3b4e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 03:21:42.996645    2019 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0930 03:21:42.996669    2019 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.105.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0930 03:21:42.996680    2019 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0930 03:21:42.996725    2019 addons.go:69] Setting yakd=true in profile "addons-584000"
	I0930 03:21:42.996732    2019 addons.go:234] Setting addon yakd=true in "addons-584000"
	I0930 03:21:42.996744    2019 addons.go:69] Setting default-storageclass=true in profile "addons-584000"
	I0930 03:21:42.996756    2019 host.go:66] Checking if "addons-584000" exists ...
	I0930 03:21:42.996764    2019 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-584000"
	I0930 03:21:42.996766    2019 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-584000"
	I0930 03:21:42.996784    2019 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-584000"
	I0930 03:21:42.996794    2019 host.go:66] Checking if "addons-584000" exists ...
	I0930 03:21:42.996802    2019 addons.go:69] Setting ingress-dns=true in profile "addons-584000"
	I0930 03:21:42.996812    2019 addons.go:234] Setting addon ingress-dns=true in "addons-584000"
	I0930 03:21:42.996831    2019 host.go:66] Checking if "addons-584000" exists ...
	I0930 03:21:42.996841    2019 addons.go:69] Setting ingress=true in profile "addons-584000"
	I0930 03:21:42.996854    2019 addons.go:234] Setting addon ingress=true in "addons-584000"
	I0930 03:21:42.996870    2019 host.go:66] Checking if "addons-584000" exists ...
	I0930 03:21:42.996883    2019 addons.go:69] Setting inspektor-gadget=true in profile "addons-584000"
	I0930 03:21:42.996887    2019 addons.go:234] Setting addon inspektor-gadget=true in "addons-584000"
	I0930 03:21:42.996894    2019 host.go:66] Checking if "addons-584000" exists ...
	I0930 03:21:42.996728    2019 addons.go:69] Setting gcp-auth=true in profile "addons-584000"
	I0930 03:21:42.996944    2019 mustload.go:65] Loading cluster: addons-584000
	I0930 03:21:42.996980    2019 addons.go:69] Setting metrics-server=true in profile "addons-584000"
	I0930 03:21:42.996989    2019 addons.go:234] Setting addon metrics-server=true in "addons-584000"
	I0930 03:21:42.996996    2019 host.go:66] Checking if "addons-584000" exists ...
	I0930 03:21:42.997000    2019 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-584000"
	I0930 03:21:42.997005    2019 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-584000"
	I0930 03:21:42.997024    2019 addons.go:69] Setting volcano=true in profile "addons-584000"
	I0930 03:21:42.997029    2019 addons.go:234] Setting addon volcano=true in "addons-584000"
	I0930 03:21:42.997036    2019 host.go:66] Checking if "addons-584000" exists ...
	I0930 03:21:42.997068    2019 addons.go:69] Setting registry=true in profile "addons-584000"
	I0930 03:21:42.997073    2019 addons.go:234] Setting addon registry=true in "addons-584000"
	I0930 03:21:42.997082    2019 host.go:66] Checking if "addons-584000" exists ...
	I0930 03:21:42.996740    2019 addons.go:69] Setting cloud-spanner=true in profile "addons-584000"
	I0930 03:21:42.997142    2019 addons.go:234] Setting addon cloud-spanner=true in "addons-584000"
	I0930 03:21:42.997145    2019 addons.go:69] Setting storage-provisioner=true in profile "addons-584000"
	I0930 03:21:42.997148    2019 host.go:66] Checking if "addons-584000" exists ...
	I0930 03:21:42.997150    2019 addons.go:234] Setting addon storage-provisioner=true in "addons-584000"
	I0930 03:21:42.997160    2019 host.go:66] Checking if "addons-584000" exists ...
	I0930 03:21:42.997188    2019 addons.go:69] Setting volumesnapshots=true in profile "addons-584000"
	I0930 03:21:42.997194    2019 addons.go:234] Setting addon volumesnapshots=true in "addons-584000"
	I0930 03:21:42.997201    2019 host.go:66] Checking if "addons-584000" exists ...
	I0930 03:21:42.996742    2019 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-584000"
	I0930 03:21:42.997230    2019 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-584000"
	I0930 03:21:42.997242    2019 host.go:66] Checking if "addons-584000" exists ...
	I0930 03:21:42.999205    2019 config.go:182] Loaded profile config "addons-584000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0930 03:21:42.999430    2019 retry.go:31] will retry after 920.950477ms: connect: dial unix /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/addons-584000/monitor: connect: connection refused
	I0930 03:21:42.999436    2019 retry.go:31] will retry after 887.747733ms: connect: dial unix /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/addons-584000/monitor: connect: connection refused
	I0930 03:21:42.999486    2019 config.go:182] Loaded profile config "addons-584000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0930 03:21:42.999611    2019 retry.go:31] will retry after 981.580095ms: connect: dial unix /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/addons-584000/monitor: connect: connection refused
	I0930 03:21:42.999614    2019 retry.go:31] will retry after 641.437271ms: connect: dial unix /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/addons-584000/monitor: connect: connection refused
	I0930 03:21:42.999628    2019 retry.go:31] will retry after 712.495312ms: connect: dial unix /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/addons-584000/monitor: connect: connection refused
	I0930 03:21:42.999664    2019 retry.go:31] will retry after 806.972235ms: connect: dial unix /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/addons-584000/monitor: connect: connection refused
	I0930 03:21:42.999722    2019 retry.go:31] will retry after 1.156241307s: connect: dial unix /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/addons-584000/monitor: connect: connection refused
	I0930 03:21:42.999717    2019 retry.go:31] will retry after 1.482507225s: connect: dial unix /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/addons-584000/monitor: connect: connection refused
	I0930 03:21:42.999812    2019 retry.go:31] will retry after 1.161607841s: connect: dial unix /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/addons-584000/monitor: connect: connection refused
	I0930 03:21:42.999813    2019 retry.go:31] will retry after 907.286216ms: connect: dial unix /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/addons-584000/monitor: connect: connection refused
	I0930 03:21:42.999849    2019 retry.go:31] will retry after 700.244972ms: connect: dial unix /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/addons-584000/monitor: connect: connection refused
	I0930 03:21:42.999854    2019 retry.go:31] will retry after 828.186373ms: connect: dial unix /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/addons-584000/monitor: connect: connection refused
	I0930 03:21:42.999904    2019 retry.go:31] will retry after 936.421513ms: connect: dial unix /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/addons-584000/monitor: connect: connection refused
	I0930 03:21:43.002418    2019 out.go:177] * Verifying Kubernetes components...
	I0930 03:21:43.009379    2019 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0930 03:21:43.013468    2019 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0930 03:21:43.013504    2019 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 03:21:43.025539    2019 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0930 03:21:43.025555    2019 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0930 03:21:43.032435    2019 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0930 03:21:43.035475    2019 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0930 03:21:43.039439    2019 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0930 03:21:43.039556    2019 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0930 03:21:43.039606    2019 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0930 03:21:43.039615    2019 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19734-1406/.minikube/machines/addons-584000/id_rsa Username:docker}
	I0930 03:21:43.045340    2019 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0930 03:21:43.046073    2019 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.105.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0930 03:21:43.052446    2019 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0930 03:21:43.055418    2019 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0930 03:21:43.062445    2019 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0930 03:21:43.065427    2019 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0930 03:21:43.065437    2019 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0930 03:21:43.065449    2019 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19734-1406/.minikube/machines/addons-584000/id_rsa Username:docker}
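
The "scp memory --> ..." entries record minikube streaming in-memory addon manifests to the guest over SSH rather than copying files from the host disk. A sketch of that pattern using golang.org/x/crypto/ssh; the helper name and the tee-based transfer are illustrative assumptions, not minikube's sshutil implementation:

    package sketch

    import (
        "bytes"
        "os"

        "golang.org/x/crypto/ssh"
    )

    // CopyBytesOverSSH streams data into dst on the remote host by piping
    // it through a remote shell, mirroring the "scp memory -->" idea.
    func CopyBytesOverSSH(addr, user, keyPath, dst string, data []byte) error {
        key, err := os.ReadFile(keyPath)
        if err != nil {
            return err
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            return err
        }
        cfg := &ssh.ClientConfig{
            User:            user,
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a local test VM
        }
        client, err := ssh.Dial("tcp", addr, cfg)
        if err != nil {
            return err
        }
        defer client.Close()
        sess, err := client.NewSession()
        if err != nil {
            return err
        }
        defer sess.Close()
        sess.Stdin = bytes.NewReader(data)
        return sess.Run("sudo tee " + dst + " >/dev/null")
    }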
	I0930 03:21:43.146793    2019 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0930 03:21:43.186031    2019 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0930 03:21:43.186044    2019 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0930 03:21:43.191128    2019 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0930 03:21:43.194843    2019 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0930 03:21:43.194852    2019 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0930 03:21:43.200706    2019 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0930 03:21:43.200717    2019 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0930 03:21:43.263567    2019 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0930 03:21:43.263584    2019 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0930 03:21:43.284144    2019 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0930 03:21:43.284158    2019 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0930 03:21:43.306223    2019 start.go:971] {"host.minikube.internal": 192.168.105.1} host record injected into CoreDNS's ConfigMap
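
The sed pipeline a few lines up rewrites the coredns ConfigMap so that host.minikube.internal resolves to the host gateway (192.168.105.1). The same edit expressed with client-go instead of sed; a sketch under the assumption of the standard CoreDNS Corefile layout, with a hypothetical helper name:

    package sketch

    import (
        "context"
        "strings"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // InjectMinikubeHost inserts a hosts{} stanza ahead of the forward
    // plugin in the CoreDNS Corefile, the same edit the sed pipeline makes.
    func InjectMinikubeHost(ctx context.Context, cs kubernetes.Interface, gatewayIP string) error {
        cm, err := cs.CoreV1().ConfigMaps("kube-system").Get(ctx, "coredns", metav1.GetOptions{})
        if err != nil {
            return err
        }
        stanza := "        hosts {\n           " + gatewayIP + " host.minikube.internal\n           fallthrough\n        }\n"
        cm.Data["Corefile"] = strings.Replace(cm.Data["Corefile"],
            "        forward .", stanza+"        forward .", 1)
        _, err = cs.CoreV1().ConfigMaps("kube-system").Update(ctx, cm, metav1.UpdateOptions{})
        return err
    }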
	I0930 03:21:43.307657    2019 node_ready.go:35] waiting up to 6m0s for node "addons-584000" to be "Ready" ...
	I0930 03:21:43.314576    2019 node_ready.go:49] node "addons-584000" has status "Ready":"True"
	I0930 03:21:43.314596    2019 node_ready.go:38] duration metric: took 6.911417ms for node "addons-584000" to be "Ready" ...
	I0930 03:21:43.314601    2019 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0930 03:21:43.321625    2019 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-6nzmp" in "kube-system" namespace to be "Ready" ...
	I0930 03:21:43.324986    2019 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0930 03:21:43.324995    2019 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0930 03:21:43.361436    2019 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0930 03:21:43.361449    2019 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0930 03:21:43.386792    2019 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0930 03:21:43.386801    2019 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0930 03:21:43.412426    2019 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0930 03:21:43.412438    2019 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0930 03:21:43.431722    2019 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0930 03:21:43.431734    2019 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0930 03:21:43.470297    2019 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0930 03:21:43.643554    2019 host.go:66] Checking if "addons-584000" exists ...
	I0930 03:21:43.705658    2019 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.10.0
	I0930 03:21:43.708616    2019 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.10.0
	I0930 03:21:43.712591    2019 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.10.0
	I0930 03:21:43.717152    2019 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0930 03:21:43.717163    2019 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (471825 bytes)
	I0930 03:21:43.717175    2019 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19734-1406/.minikube/machines/addons-584000/id_rsa Username:docker}
	I0930 03:21:43.721600    2019 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0930 03:21:43.725648    2019 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0930 03:21:43.725656    2019 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0930 03:21:43.725666    2019 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19734-1406/.minikube/machines/addons-584000/id_rsa Username:docker}
	I0930 03:21:43.810466    2019 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0930 03:21:43.814563    2019 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0930 03:21:43.814572    2019 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0930 03:21:43.814584    2019 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19734-1406/.minikube/machines/addons-584000/id_rsa Username:docker}
	I0930 03:21:43.816326    2019 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-584000" context rescaled to 1 replicas
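
Rescaling the coredns deployment to one replica, as kapi.go reports here, amounts to an update through the scale subresource. A client-go sketch with a hypothetical helper name; the namespace and deployment name are the ones from the log:

    package sketch

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // ScaleDeployment sets a deployment's replica count through the scale
    // subresource, avoiding a full read-modify-write of the deployment spec.
    func ScaleDeployment(ctx context.Context, cs kubernetes.Interface, ns, name string, replicas int32) error {
        scale, err := cs.AppsV1().Deployments(ns).GetScale(ctx, name, metav1.GetOptions{})
        if err != nil {
            return err
        }
        scale.Spec.Replicas = replicas
        _, err = cs.AppsV1().Deployments(ns).UpdateScale(ctx, name, scale, metav1.UpdateOptions{})
        return err
    }

Here the call would be ScaleDeployment(ctx, cs, "kube-system", "coredns", 1).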
	I0930 03:21:43.831569    2019 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0930 03:21:43.835634    2019 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0930 03:21:43.835641    2019 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0930 03:21:43.835650    2019 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19734-1406/.minikube/machines/addons-584000/id_rsa Username:docker}
	I0930 03:21:43.859683    2019 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0930 03:21:43.872049    2019 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0930 03:21:43.890622    2019 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.24
	I0930 03:21:43.893632    2019 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0930 03:21:43.893640    2019 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0930 03:21:43.893651    2019 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19734-1406/.minikube/machines/addons-584000/id_rsa Username:docker}
	I0930 03:21:43.911523    2019 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0930 03:21:43.915611    2019 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0930 03:21:43.915620    2019 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0930 03:21:43.915633    2019 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19734-1406/.minikube/machines/addons-584000/id_rsa Username:docker}
	I0930 03:21:43.925576    2019 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0930 03:21:43.929553    2019 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0930 03:21:43.929563    2019 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0930 03:21:43.929574    2019 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19734-1406/.minikube/machines/addons-584000/id_rsa Username:docker}
	I0930 03:21:43.931557    2019 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0930 03:21:43.931564    2019 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0930 03:21:43.934606    2019 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0930 03:21:43.940618    2019 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.7
	I0930 03:21:43.943643    2019 out.go:177]   - Using image docker.io/registry:2.8.3
	I0930 03:21:43.947598    2019 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0930 03:21:43.947607    2019 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0930 03:21:43.947617    2019 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19734-1406/.minikube/machines/addons-584000/id_rsa Username:docker}
	I0930 03:21:43.975546    2019 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0930 03:21:43.975558    2019 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0930 03:21:43.986563    2019 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0930 03:21:43.990578    2019 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0930 03:21:43.990590    2019 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0930 03:21:43.990602    2019 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19734-1406/.minikube/machines/addons-584000/id_rsa Username:docker}
	I0930 03:21:44.016204    2019 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0930 03:21:44.018439    2019 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0930 03:21:44.018445    2019 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0930 03:21:44.071490    2019 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0930 03:21:44.071504    2019 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0930 03:21:44.077046    2019 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0930 03:21:44.077056    2019 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0930 03:21:44.085593    2019 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0930 03:21:44.085607    2019 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0930 03:21:44.104327    2019 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0930 03:21:44.104337    2019 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0930 03:21:44.113961    2019 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0930 03:21:44.113972    2019 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0930 03:21:44.154091    2019 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0930 03:21:44.154105    2019 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0930 03:21:44.157110    2019 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-584000"
	I0930 03:21:44.157126    2019 host.go:66] Checking if "addons-584000" exists ...
	I0930 03:21:44.161449    2019 out.go:177]   - Using image docker.io/busybox:stable
	I0930 03:21:44.169560    2019 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0930 03:21:44.172534    2019 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0930 03:21:44.172544    2019 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0930 03:21:44.172554    2019 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19734-1406/.minikube/machines/addons-584000/id_rsa Username:docker}
	I0930 03:21:44.177579    2019 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0930 03:21:44.178446    2019 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0930 03:21:44.178453    2019 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0930 03:21:44.181600    2019 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0930 03:21:44.181607    2019 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0930 03:21:44.181616    2019 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19734-1406/.minikube/machines/addons-584000/id_rsa Username:docker}
	I0930 03:21:44.198116    2019 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0930 03:21:44.208432    2019 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0930 03:21:44.208443    2019 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0930 03:21:44.227507    2019 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0930 03:21:44.227523    2019 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0930 03:21:44.234154    2019 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0930 03:21:44.234164    2019 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0930 03:21:44.268461    2019 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0930 03:21:44.282665    2019 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0930 03:21:44.282680    2019 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0930 03:21:44.305139    2019 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0930 03:21:44.305155    2019 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0930 03:21:44.309306    2019 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0930 03:21:44.309318    2019 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0930 03:21:44.328922    2019 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0930 03:21:44.328932    2019 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0930 03:21:44.379989    2019 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0930 03:21:44.380004    2019 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0930 03:21:44.414266    2019 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0930 03:21:44.417895    2019 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0930 03:21:44.454912    2019 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0930 03:21:44.466740    2019 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0930 03:21:44.483847    2019 addons.go:234] Setting addon default-storageclass=true in "addons-584000"
	I0930 03:21:44.483868    2019 host.go:66] Checking if "addons-584000" exists ...
	I0930 03:21:44.484467    2019 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0930 03:21:44.484474    2019 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0930 03:21:44.484482    2019 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19734-1406/.minikube/machines/addons-584000/id_rsa Username:docker}
	I0930 03:21:44.500186    2019 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0930 03:21:44.500199    2019 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0930 03:21:44.544478    2019 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (1.353376959s)
	I0930 03:21:44.544516    2019 addons.go:475] Verifying addon ingress=true in "addons-584000"
	I0930 03:21:44.549628    2019 out.go:177] * Verifying ingress addon...
	I0930 03:21:44.558028    2019 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0930 03:21:44.560174    2019 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0930 03:21:44.560180    2019 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
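
The kapi.go "waiting for pod" lines that dominate the rest of this log poll pods by label selector until they leave Pending. The equivalent readiness check in client-go; a sketch with a hypothetical helper name, using the selector from the log:

    package sketch

    import (
        "context"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // AllPodsRunning reports whether at least one pod matches the selector
    // in ns and every match has left Pending for the Running phase.
    func AllPodsRunning(ctx context.Context, cs kubernetes.Interface, ns, selector string) (bool, error) {
        pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
        if err != nil {
            return false, err
        }
        if len(pods.Items) == 0 {
            return false, nil
        }
        for _, p := range pods.Items {
            if p.Status.Phase != corev1.PodRunning {
                return false, nil
            }
        }
        return true, nil
    }

A caller would invoke AllPodsRunning(ctx, cs, "ingress-nginx", "app.kubernetes.io/name=ingress-nginx") in a loop; the timestamps above suggest a poll interval of roughly half a second, which is why the same "current state: Pending" line repeats until the images finish pulling.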
	I0930 03:21:44.633651    2019 addons.go:431] installing /etc/kubernetes/addons/ig-configmap.yaml
	I0930 03:21:44.633663    2019 ssh_runner.go:362] scp inspektor-gadget/ig-configmap.yaml --> /etc/kubernetes/addons/ig-configmap.yaml (754 bytes)
	I0930 03:21:44.716047    2019 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0930 03:21:44.735836    2019 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0930 03:21:44.735848    2019 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0930 03:21:44.898772    2019 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0930 03:21:44.898784    2019 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (8196 bytes)
	I0930 03:21:44.952312    2019 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-configmap.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0930 03:21:45.097134    2019 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 03:21:45.326750    2019 pod_ready.go:103] pod "coredns-7c65d6cfc9-6nzmp" in "kube-system" namespace has status "Ready":"False"
	I0930 03:21:45.569242    2019 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 03:21:46.068564    2019 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 03:21:46.216686    2019 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (2.74645075s)
	I0930 03:21:46.216702    2019 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-584000"
	I0930 03:21:46.220635    2019 out.go:177] * Verifying csi-hostpath-driver addon...
	I0930 03:21:46.228228    2019 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0930 03:21:46.254384    2019 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0930 03:21:46.254393    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 03:21:46.573146    2019 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 03:21:46.740833    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 03:21:47.103776    2019 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 03:21:47.312818    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 03:21:47.365261    2019 pod_ready.go:103] pod "coredns-7c65d6cfc9-6nzmp" in "kube-system" namespace has status "Ready":"False"
	I0930 03:21:47.490447    2019 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.618486583s)
	I0930 03:21:47.490478    2019 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (3.630895917s)
	I0930 03:21:47.490519    2019 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.556014334s)
	I0930 03:21:47.490557    2019 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.474448042s)
	I0930 03:21:47.490601    2019 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.292571916s)
	W0930 03:21:47.490613    2019 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0930 03:21:47.490619    2019 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (3.222246333s)
	I0930 03:21:47.490624    2019 retry.go:31] will retry after 219.452837ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0930 03:21:47.490625    2019 addons.go:475] Verifying addon registry=true in "addons-584000"
	I0930 03:21:47.490646    2019 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (3.076465s)
	I0930 03:21:47.490695    2019 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (3.072884041s)
	I0930 03:21:47.490738    2019 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (3.035904583s)
	I0930 03:21:47.490743    2019 addons.go:475] Verifying addon metrics-server=true in "addons-584000"
	I0930 03:21:47.490759    2019 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (3.024102542s)
	I0930 03:21:47.490791    2019 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.774818166s)
	I0930 03:21:47.490828    2019 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-configmap.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (2.538581375s)
	I0930 03:21:47.494571    2019 out.go:177] * Verifying registry addon...
	I0930 03:21:47.504575    2019 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-584000 service yakd-dashboard -n yakd-dashboard
	
	I0930 03:21:47.507981    2019 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	W0930 03:21:47.508019    2019 out.go:270] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class csi-hostpath-sc as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "csi-hostpath-sc": the object has been modified; please apply your changes to the latest version and try again]
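
The warning above is the API server's optimistic-concurrency check: two clients raced to update the csi-hostpath-sc StorageClass, and the losing write carried a stale resourceVersion. client-go ships a standard remedy, retry.RetryOnConflict, which re-reads the object before each attempt. A sketch with a hypothetical helper name; the annotation key is the standard one for default storage classes:

    package sketch

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/util/retry"
    )

    // MarkNonDefault clears the default-class annotation, re-fetching the
    // StorageClass on each attempt so a 409 Conflict retries against a
    // fresh resourceVersion instead of failing outright.
    func MarkNonDefault(ctx context.Context, cs kubernetes.Interface, name string) error {
        return retry.RetryOnConflict(retry.DefaultRetry, func() error {
            sc, err := cs.StorageV1().StorageClasses().Get(ctx, name, metav1.GetOptions{})
            if err != nil {
                return err
            }
            if sc.Annotations == nil {
                sc.Annotations = map[string]string{}
            }
            sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "false"
            _, err = cs.StorageV1().StorageClasses().Update(ctx, sc, metav1.UpdateOptions{})
            return err
        })
    }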
	I0930 03:21:47.516865    2019 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0930 03:21:47.516873    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 03:21:47.615448    2019 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 03:21:47.712228    2019 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
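
The failure being retried here (with --force this time) is an ordering problem: the VolumeSnapshotClass object was applied in the same batch as the CRD that defines it, before the API server had established the new type, hence "ensure CRDs are installed first". An alternative to retrying is to apply the CRDs, block on their Established condition, then apply the dependent resources. A sketch shelling out to kubectl, matching how ssh_runner invokes it; the helper name is hypothetical:

    package sketch

    import (
        "fmt"
        "os/exec"
    )

    // ApplyWithCRDWait applies CRD manifests first, blocks until each CRD
    // reports the Established condition, then applies the dependent
    // resources, avoiding the "ensure CRDs are installed first" race.
    func ApplyWithCRDWait(crdFiles, crFiles, crdNames []string) error {
        for _, f := range crdFiles {
            if out, err := exec.Command("kubectl", "apply", "-f", f).CombinedOutput(); err != nil {
                return fmt.Errorf("apply %s: %v: %s", f, err, out)
            }
        }
        for _, name := range crdNames {
            // Blocks until status.conditions[type=Established] is True.
            if err := exec.Command("kubectl", "wait", "--for=condition=Established",
                "--timeout=60s", "crd/"+name).Run(); err != nil {
                return err
            }
        }
        for _, f := range crFiles {
            if err := exec.Command("kubectl", "apply", "-f", f).Run(); err != nil {
                return err
            }
        }
        return nil
    }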
	I0930 03:21:47.732768    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 03:21:48.010900    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 03:21:48.112691    2019 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 03:21:48.232816    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 03:21:48.511767    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 03:21:48.561716    2019 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 03:21:48.732858    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 03:21:49.011926    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 03:21:49.113095    2019 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 03:21:49.233125    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 03:21:49.511667    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 03:21:49.561926    2019 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 03:21:49.732630    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 03:21:49.825996    2019 pod_ready.go:103] pod "coredns-7c65d6cfc9-6nzmp" in "kube-system" namespace has status "Ready":"False"
	I0930 03:21:50.011733    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 03:21:50.112120    2019 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 03:21:50.232581    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 03:21:50.511345    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 03:21:50.611684    2019 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 03:21:50.648962    2019 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0930 03:21:50.648977    2019 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19734-1406/.minikube/machines/addons-584000/id_rsa Username:docker}
	I0930 03:21:50.677823    2019 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0930 03:21:50.683970    2019 addons.go:234] Setting addon gcp-auth=true in "addons-584000"
	I0930 03:21:50.683989    2019 host.go:66] Checking if "addons-584000" exists ...
	I0930 03:21:50.684722    2019 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0930 03:21:50.684729    2019 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19734-1406/.minikube/machines/addons-584000/id_rsa Username:docker}
	I0930 03:21:50.713897    2019 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0930 03:21:50.720761    2019 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0930 03:21:50.725824    2019 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0930 03:21:50.725830    2019 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0930 03:21:50.730859    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 03:21:50.733008    2019 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0930 03:21:50.733014    2019 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0930 03:21:50.738740    2019 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0930 03:21:50.738747    2019 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0930 03:21:50.744949    2019 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0930 03:21:51.000705    2019 addons.go:475] Verifying addon gcp-auth=true in "addons-584000"
	I0930 03:21:51.006829    2019 out.go:177] * Verifying gcp-auth addon...
	I0930 03:21:51.011514    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 03:21:51.013192    2019 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0930 03:21:51.014066    2019 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0930 03:21:51.112450    2019 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 03:21:51.232733    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 03:21:51.512994    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 03:21:51.563663    2019 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 03:21:51.734174    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 03:21:51.832442    2019 pod_ready.go:103] pod "coredns-7c65d6cfc9-6nzmp" in "kube-system" namespace has status "Ready":"False"
	I0930 03:21:52.012077    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 03:21:52.062214    2019 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 03:21:52.232811    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 03:21:52.514872    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 03:21:52.565783    2019 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 03:21:52.738041    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 03:21:53.012094    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 03:21:53.063359    2019 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 03:21:53.233507    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 03:21:53.519642    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 03:21:53.566952    2019 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 03:21:53.734874    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 03:21:54.012180    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 03:21:54.061537    2019 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 03:21:54.232432    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 03:21:54.327305    2019 pod_ready.go:103] pod "coredns-7c65d6cfc9-6nzmp" in "kube-system" namespace has status "Ready":"False"
	I0930 03:21:54.510789    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 03:21:54.563157    2019 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 03:21:54.733944    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 03:21:55.011577    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 03:21:55.061395    2019 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 03:21:55.232421    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 03:21:55.511511    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 03:21:55.561653    2019 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 03:21:55.730842    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 03:21:56.011556    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 03:21:56.061496    2019 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 03:21:56.232376    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 03:21:56.511918    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 03:21:56.561742    2019 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 03:21:56.732197    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 03:21:56.826027    2019 pod_ready.go:103] pod "coredns-7c65d6cfc9-6nzmp" in "kube-system" namespace has status "Ready":"False"
	I0930 03:21:57.011579    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 03:21:57.061386    2019 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 03:21:57.232292    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 03:21:57.511495    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 03:21:57.561299    2019 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 03:21:57.732487    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 03:21:58.011684    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 03:21:58.112412    2019 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 03:21:58.232271    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 03:21:58.511248    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 03:21:58.561405    2019 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 03:21:58.732151    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 03:21:59.011365    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 03:21:59.061500    2019 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 03:21:59.232238    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 03:21:59.325739    2019 pod_ready.go:103] pod "coredns-7c65d6cfc9-6nzmp" in "kube-system" namespace has status "Ready":"False"
	I0930 03:21:59.511133    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 03:21:59.561083    2019 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 03:21:59.732185    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 03:22:00.011391    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 03:22:00.061206    2019 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 03:22:00.232306    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 03:22:00.511502    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 03:22:00.561701    2019 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 03:22:00.732129    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 03:22:01.011361    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 03:22:01.061457    2019 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 03:22:01.232242    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 03:22:01.387015    2019 pod_ready.go:103] pod "coredns-7c65d6cfc9-6nzmp" in "kube-system" namespace has status "Ready":"False"
	I0930 03:22:01.511529    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 03:22:01.561122    2019 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 03:22:01.732322    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 03:22:02.011752    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 03:22:02.112975    2019 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 03:22:02.232383    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 03:22:02.511729    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 03:22:02.562600    2019 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 03:22:02.734098    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 03:22:03.012456    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 03:22:03.062567    2019 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 03:22:03.232441    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 03:22:03.511388    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 03:22:03.561892    2019 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 03:22:03.732623    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 03:22:03.827121    2019 pod_ready.go:103] pod "coredns-7c65d6cfc9-6nzmp" in "kube-system" namespace has status "Ready":"False"
	I0930 03:22:04.012095    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 03:22:04.061933    2019 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 03:22:04.232486    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 03:22:04.511318    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 03:22:04.561228    2019 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 03:22:04.732202    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 03:22:05.011169    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 03:22:05.061544    2019 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	[log condensed: from 03:22:05 to 03:22:20, kapi.go:96 polled pods matching "kubernetes.io/minikube-addons=registry", "app.kubernetes.io/name=ingress-nginx", and "kubernetes.io/minikube-addons=csi-hostpath-driver" roughly every 500ms per selector, each time reporting current state: Pending: [<nil>]; in parallel, pod_ready.go:103 re-checked pod "coredns-7c65d6cfc9-6nzmp" in "kube-system" about every 2s, each time with status "Ready":"False"]
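The kapi.go:96 lines condensed above come from a generic label-selector wait. As a rough illustration only (this is not minikube's actual kapi.go; the namespace, kubeconfig path, and the ~500ms interval are assumptions read off the log), a client-go loop of that shape might look like:

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitForSelector polls pods matching selector in ns until none are Pending
    // or the timeout elapses, mirroring the "waiting for pod ... Pending" lines.
    func waitForSelector(cs *kubernetes.Clientset, ns, selector string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            pods, err := cs.CoreV1().Pods(ns).List(context.TODO(), metav1.ListOptions{LabelSelector: selector})
            if err != nil {
                return err
            }
            pending := false
            for _, p := range pods.Items {
                if p.Status.Phase == corev1.PodPending {
                    pending = true
                }
            }
            if len(pods.Items) > 0 && !pending {
                return nil
            }
            fmt.Printf("waiting for pod %q, current state: Pending\n", selector)
            time.Sleep(500 * time.Millisecond) // assumed interval, inferred from timestamps
        }
        return fmt.Errorf("timed out waiting for %s", selector)
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        if err := waitForSelector(cs, "kube-system", "kubernetes.io/minikube-addons=registry", 6*time.Minute); err != nil {
            panic(err)
        }
    }

Three such loops, one per addon selector, would produce exactly the interleaved streams seen in this log.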
	I0930 03:22:20.325214    2019 pod_ready.go:93] pod "coredns-7c65d6cfc9-6nzmp" in "kube-system" namespace has status "Ready":"True"
	I0930 03:22:20.325224    2019 pod_ready.go:82] duration metric: took 37.004720666s for pod "coredns-7c65d6cfc9-6nzmp" in "kube-system" namespace to be "Ready" ...
	I0930 03:22:20.325228    2019 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-7nq9l" in "kube-system" namespace to be "Ready" ...
	I0930 03:22:20.326060    2019 pod_ready.go:98] error getting pod "coredns-7c65d6cfc9-7nq9l" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-7nq9l" not found
	I0930 03:22:20.326067    2019 pod_ready.go:82] duration metric: took 835.083µs for pod "coredns-7c65d6cfc9-7nq9l" in "kube-system" namespace to be "Ready" ...
	E0930 03:22:20.326071    2019 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-7c65d6cfc9-7nq9l" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-7nq9l" not found
	I0930 03:22:20.326074    2019 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-584000" in "kube-system" namespace to be "Ready" ...
	I0930 03:22:20.327958    2019 pod_ready.go:93] pod "etcd-addons-584000" in "kube-system" namespace has status "Ready":"True"
	I0930 03:22:20.327963    2019 pod_ready.go:82] duration metric: took 1.8835ms for pod "etcd-addons-584000" in "kube-system" namespace to be "Ready" ...
	I0930 03:22:20.327967    2019 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-584000" in "kube-system" namespace to be "Ready" ...
	I0930 03:22:20.330125    2019 pod_ready.go:93] pod "kube-apiserver-addons-584000" in "kube-system" namespace has status "Ready":"True"
	I0930 03:22:20.330130    2019 pod_ready.go:82] duration metric: took 2.160125ms for pod "kube-apiserver-addons-584000" in "kube-system" namespace to be "Ready" ...
	I0930 03:22:20.330134    2019 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-584000" in "kube-system" namespace to be "Ready" ...
	I0930 03:22:20.331872    2019 pod_ready.go:93] pod "kube-controller-manager-addons-584000" in "kube-system" namespace has status "Ready":"True"
	I0930 03:22:20.331879    2019 pod_ready.go:82] duration metric: took 1.742834ms for pod "kube-controller-manager-addons-584000" in "kube-system" namespace to be "Ready" ...
	I0930 03:22:20.331883    2019 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-p4msl" in "kube-system" namespace to be "Ready" ...
	I0930 03:22:20.511045    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 03:22:20.525848    2019 pod_ready.go:93] pod "kube-proxy-p4msl" in "kube-system" namespace has status "Ready":"True"
	I0930 03:22:20.525857    2019 pod_ready.go:82] duration metric: took 193.96325ms for pod "kube-proxy-p4msl" in "kube-system" namespace to be "Ready" ...
	I0930 03:22:20.525862    2019 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-584000" in "kube-system" namespace to be "Ready" ...
	I0930 03:22:20.560958    2019 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 03:22:20.729784    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 03:22:20.926152    2019 pod_ready.go:93] pod "kube-scheduler-addons-584000" in "kube-system" namespace has status "Ready":"True"
	I0930 03:22:20.926161    2019 pod_ready.go:82] duration metric: took 400.304333ms for pod "kube-scheduler-addons-584000" in "kube-system" namespace to be "Ready" ...
	I0930 03:22:20.926165    2019 pod_ready.go:39] duration metric: took 37.612713916s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
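The pod_ready.go waits that just completed reduce to reading each pod's Ready condition. A minimal sketch, reusing the clientset and imports from the previous example (again an illustration, not the real pod_ready.go):

    // podIsReady reports whether the pod's Ready condition is "True".
    func podIsReady(cs *kubernetes.Clientset, ns, name string) (bool, error) {
        pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
        if err != nil {
            // A missing pod (e.g. coredns-7c65d6cfc9-7nq9l above) surfaces here.
            return false, err
        }
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue, nil
            }
        }
        return false, nil
    }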
	I0930 03:22:20.926175    2019 api_server.go:52] waiting for apiserver process to appear ...
	I0930 03:22:20.926238    2019 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 03:22:20.933003    2019 api_server.go:72] duration metric: took 37.9374865s to wait for apiserver process to appear ...
	I0930 03:22:20.933012    2019 api_server.go:88] waiting for apiserver healthz status ...
	I0930 03:22:20.933020    2019 api_server.go:253] Checking apiserver healthz at https://192.168.105.2:8443/healthz ...
	I0930 03:22:20.935590    2019 api_server.go:279] https://192.168.105.2:8443/healthz returned 200:
	ok
	I0930 03:22:20.936189    2019 api_server.go:141] control plane version: v1.31.1
	I0930 03:22:20.936195    2019 api_server.go:131] duration metric: took 3.179583ms to wait for apiserver health ...
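The healthz probe at api_server.go:253 is a plain HTTPS GET expecting a 200 with body "ok". A self-contained sketch of that request (illustrative; the real check authenticates properly, and InsecureSkipVerify here only keeps the example short):

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        // Default RBAC binds system:public-info-viewer to unauthenticated users,
        // so /healthz is typically reachable without client certificates.
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        resp, err := client.Get("https://192.168.105.2:8443/healthz")
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
    }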
	I0930 03:22:20.936198    2019 system_pods.go:43] waiting for kube-system pods to appear ...
	I0930 03:22:21.009366    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 03:22:21.110336    2019 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 03:22:21.129695    2019 system_pods.go:59] 17 kube-system pods found
	I0930 03:22:21.129706    2019 system_pods.go:61] "coredns-7c65d6cfc9-6nzmp" [dac2b8c3-136d-4329-bf17-67296822156d] Running
	I0930 03:22:21.129711    2019 system_pods.go:61] "csi-hostpath-attacher-0" [c2f45c88-7182-401c-87da-719f6af0de07] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0930 03:22:21.129715    2019 system_pods.go:61] "csi-hostpath-resizer-0" [18e3384f-4ebb-4f33-b166-3a212552a7be] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0930 03:22:21.129720    2019 system_pods.go:61] "csi-hostpathplugin-q6gtd" [c73cf9ee-3e5f-485c-be35-388f94df0762] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0930 03:22:21.129722    2019 system_pods.go:61] "etcd-addons-584000" [2da5ff60-1f1e-4dd4-871b-06de65e1e88a] Running
	I0930 03:22:21.129724    2019 system_pods.go:61] "kube-apiserver-addons-584000" [ab02e109-c23a-4804-8699-797839a69a66] Running
	I0930 03:22:21.129727    2019 system_pods.go:61] "kube-controller-manager-addons-584000" [2b3507e9-ed88-4e99-b64b-6a468b705511] Running
	I0930 03:22:21.129729    2019 system_pods.go:61] "kube-ingress-dns-minikube" [1df66f2b-ffef-4073-a2fe-c032c6bf966a] Running
	I0930 03:22:21.129731    2019 system_pods.go:61] "kube-proxy-p4msl" [95f08ad7-22b2-4679-9580-0c4a43c6ba9e] Running
	I0930 03:22:21.129732    2019 system_pods.go:61] "kube-scheduler-addons-584000" [2aeb8cf6-4fb7-4dea-aa9e-b5c36b422acf] Running
	I0930 03:22:21.129735    2019 system_pods.go:61] "metrics-server-84c5f94fbc-rmskq" [e7606862-ddb6-4102-aa0d-f00de94eaba5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0930 03:22:21.129738    2019 system_pods.go:61] "nvidia-device-plugin-daemonset-kvn2s" [cf218953-21b1-4281-8276-c5c77d83e6eb] Running
	I0930 03:22:21.129741    2019 system_pods.go:61] "registry-66c9cd494c-j6ss4" [230e32a5-8b5f-413f-b994-093070028d06] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0930 03:22:21.129744    2019 system_pods.go:61] "registry-proxy-tkz2n" [151b7d8c-f9bc-4089-a54a-897445c55163] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0930 03:22:21.129747    2019 system_pods.go:61] "snapshot-controller-56fcc65765-hvtbm" [8e8c281d-45d9-4c42-8f1e-ec92317a9b4a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0930 03:22:21.129750    2019 system_pods.go:61] "snapshot-controller-56fcc65765-tdrht" [540f325e-982c-46c0-96a4-934f1f888213] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0930 03:22:21.129761    2019 system_pods.go:61] "storage-provisioner" [992d4a07-9dbc-45c6-b9b6-ddc28e58d2f8] Running
	I0930 03:22:21.129767    2019 system_pods.go:74] duration metric: took 193.571125ms to wait for pod list to return data ...
	I0930 03:22:21.129771    2019 default_sa.go:34] waiting for default service account to be created ...
	I0930 03:22:21.231608    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 03:22:21.326109    2019 default_sa.go:45] found service account: "default"
	I0930 03:22:21.326124    2019 default_sa.go:55] duration metric: took 196.354625ms for default service account to be created ...
	I0930 03:22:21.326128    2019 system_pods.go:116] waiting for k8s-apps to be running ...
	I0930 03:22:21.512152    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 03:22:21.535168    2019 system_pods.go:86] 17 kube-system pods found
	I0930 03:22:21.535187    2019 system_pods.go:89] "coredns-7c65d6cfc9-6nzmp" [dac2b8c3-136d-4329-bf17-67296822156d] Running
	I0930 03:22:21.535197    2019 system_pods.go:89] "csi-hostpath-attacher-0" [c2f45c88-7182-401c-87da-719f6af0de07] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0930 03:22:21.535204    2019 system_pods.go:89] "csi-hostpath-resizer-0" [18e3384f-4ebb-4f33-b166-3a212552a7be] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0930 03:22:21.535212    2019 system_pods.go:89] "csi-hostpathplugin-q6gtd" [c73cf9ee-3e5f-485c-be35-388f94df0762] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0930 03:22:21.535219    2019 system_pods.go:89] "etcd-addons-584000" [2da5ff60-1f1e-4dd4-871b-06de65e1e88a] Running
	I0930 03:22:21.535224    2019 system_pods.go:89] "kube-apiserver-addons-584000" [ab02e109-c23a-4804-8699-797839a69a66] Running
	I0930 03:22:21.535229    2019 system_pods.go:89] "kube-controller-manager-addons-584000" [2b3507e9-ed88-4e99-b64b-6a468b705511] Running
	I0930 03:22:21.535239    2019 system_pods.go:89] "kube-ingress-dns-minikube" [1df66f2b-ffef-4073-a2fe-c032c6bf966a] Running
	I0930 03:22:21.535245    2019 system_pods.go:89] "kube-proxy-p4msl" [95f08ad7-22b2-4679-9580-0c4a43c6ba9e] Running
	I0930 03:22:21.535255    2019 system_pods.go:89] "kube-scheduler-addons-584000" [2aeb8cf6-4fb7-4dea-aa9e-b5c36b422acf] Running
	I0930 03:22:21.535260    2019 system_pods.go:89] "metrics-server-84c5f94fbc-rmskq" [e7606862-ddb6-4102-aa0d-f00de94eaba5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0930 03:22:21.535267    2019 system_pods.go:89] "nvidia-device-plugin-daemonset-kvn2s" [cf218953-21b1-4281-8276-c5c77d83e6eb] Running
	I0930 03:22:21.535272    2019 system_pods.go:89] "registry-66c9cd494c-j6ss4" [230e32a5-8b5f-413f-b994-093070028d06] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0930 03:22:21.535278    2019 system_pods.go:89] "registry-proxy-tkz2n" [151b7d8c-f9bc-4089-a54a-897445c55163] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0930 03:22:21.535285    2019 system_pods.go:89] "snapshot-controller-56fcc65765-hvtbm" [8e8c281d-45d9-4c42-8f1e-ec92317a9b4a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0930 03:22:21.535291    2019 system_pods.go:89] "snapshot-controller-56fcc65765-tdrht" [540f325e-982c-46c0-96a4-934f1f888213] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0930 03:22:21.535296    2019 system_pods.go:89] "storage-provisioner" [992d4a07-9dbc-45c6-b9b6-ddc28e58d2f8] Running
	I0930 03:22:21.535304    2019 system_pods.go:126] duration metric: took 209.177125ms to wait for k8s-apps to be running ...
	I0930 03:22:21.535311    2019 system_svc.go:44] waiting for kubelet service to be running ....
	I0930 03:22:21.535439    2019 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0930 03:22:21.548031    2019 system_svc.go:56] duration metric: took 12.714167ms WaitForService to wait for kubelet
	I0930 03:22:21.548057    2019 kubeadm.go:582] duration metric: took 38.552557416s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
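The system_svc.go step above runs `systemctl is-active` and treats a zero exit code as "running". A local stand-in (the real check goes over SSH via ssh_runner, and the logged invocation is `sudo systemctl is-active --quiet service kubelet`; the simpler single-unit form is assumed here):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // `systemctl is-active --quiet <unit>` exits 0 iff the unit is active.
        if err := exec.Command("sudo", "systemctl", "is-active", "--quiet", "kubelet").Run(); err != nil {
            fmt.Println("kubelet is not active:", err)
            return
        }
        fmt.Println("kubelet is active")
    }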
	I0930 03:22:21.548080    2019 node_conditions.go:102] verifying NodePressure condition ...
	I0930 03:22:21.562495    2019 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 03:22:21.729418    2019 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0930 03:22:21.729435    2019 node_conditions.go:123] node cpu capacity is 2
	I0930 03:22:21.729446    2019 node_conditions.go:105] duration metric: took 181.365958ms to run NodePressure ...
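The NodePressure verification reads node capacities straight from the Node status. A sketch (hypothetical helper name; reuses the clientset and imports from the first example) that prints the two values reported above:

    // printNodeCapacity lists nodes and prints the two capacities the log
    // checks (ephemeral storage and CPU). Quantities are copied to a local
    // variable before calling String(), which has a pointer receiver.
    func printNodeCapacity(cs *kubernetes.Clientset) error {
        nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            return err
        }
        for _, n := range nodes.Items {
            storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
            cpu := n.Status.Capacity[corev1.ResourceCPU]
            fmt.Printf("node storage ephemeral capacity is %s\n", storage.String())
            fmt.Printf("node cpu capacity is %s\n", cpu.String())
        }
        return nil
    }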
	I0930 03:22:21.729457    2019 start.go:241] waiting for startup goroutines ...
	[log condensed: from 03:22:21.73 to 03:22:46.23, kapi.go:96 kept polling the same three selectors (registry, ingress-nginx, csi-hostpath-driver) every ~500ms; all remained Pending: [<nil>]]
	I0930 03:22:46.511996    2019 kapi.go:107] duration metric: took 59.005823084s to wait for kubernetes.io/minikube-addons=registry ...
	[log condensed: with the registry wait finished, kapi.go:96 continued polling "app.kubernetes.io/name=ingress-nginx" and "kubernetes.io/minikube-addons=csi-hostpath-driver" every ~500ms from 03:22:46 through 03:23:15, both still Pending: [<nil>]]
	I0930 03:23:15.730180    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 03:23:16.059925    2019 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 03:23:16.231197    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 03:23:16.560469    2019 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 03:23:16.729911    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 03:23:17.119017    2019 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 03:23:17.230013    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 03:23:17.559148    2019 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 03:23:17.730315    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 03:23:18.059761    2019 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 03:23:18.229988    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 03:23:18.560660    2019 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 03:23:18.731346    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 03:23:19.064058    2019 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 03:23:19.231182    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 03:23:19.564753    2019 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 03:23:19.738246    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 03:23:20.067324    2019 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 03:23:20.237281    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 03:23:20.560406    2019 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 03:23:20.732371    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 03:23:21.058948    2019 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 03:23:21.229909    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 03:23:21.558837    2019 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 03:23:21.729639    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 03:23:22.058774    2019 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 03:23:22.229914    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 03:23:22.559410    2019 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 03:23:22.729441    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 03:23:23.058901    2019 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 03:23:23.229833    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 03:23:23.558841    2019 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 03:23:23.728386    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 03:23:24.058450    2019 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 03:23:24.229621    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 03:23:24.558595    2019 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 03:23:24.729446    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 03:23:25.058546    2019 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 03:23:25.229556    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 03:23:25.558917    2019 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 03:23:25.729828    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 03:23:26.058965    2019 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 03:23:26.229757    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 03:23:26.559046    2019 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 03:23:26.729635    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 03:23:27.125606    2019 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 03:23:27.229375    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 03:23:27.559259    2019 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 03:23:27.729790    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 03:23:28.058913    2019 kapi.go:107] duration metric: took 1m43.504064s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0930 03:23:28.231218    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 03:23:28.730057    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 03:23:29.232885    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 03:23:29.735891    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 03:23:30.229457    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 03:23:30.771931    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 03:23:31.230891    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 03:23:31.735211    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 03:23:32.229774    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 03:23:32.738206    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 03:23:33.230275    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 03:23:33.729429    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 03:23:34.229602    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 03:23:34.729556    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 03:23:35.229700    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 03:23:35.729744    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 03:23:36.229449    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 03:23:36.732679    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 03:23:37.229426    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 03:23:37.729680    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 03:23:38.230017    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 03:23:38.791921    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 03:23:39.230873    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 03:23:39.731929    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 03:23:40.228578    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 03:23:40.730613    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 03:23:41.231754    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 03:23:41.735089    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 03:23:42.228720    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 03:23:42.729604    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 03:23:43.229106    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 03:23:43.729293    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 03:23:44.231871    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 03:23:44.730756    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 03:23:45.229415    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 03:23:45.731204    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 03:23:46.232151    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 03:23:46.728770    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 03:23:47.228943    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 03:23:47.728823    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 03:23:48.227135    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 03:23:48.727765    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 03:23:49.230734    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 03:23:49.731251    2019 kapi.go:107] duration metric: took 2m3.506856125s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
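
The two "duration metric" lines above close out kapi.go's label-selector waits: 1m43.5s for app.kubernetes.io/name=ingress-nginx and 2m3.5s for the CSI hostpath driver. Conceptually the loop lists pods matching the selector, logs the phase while anything is still Pending, and returns once every pod is Running. Here is a minimal client-go sketch of that idea; it is an illustration only, not minikube's actual kapi.go, and the 500ms cadence is inferred from the timestamps above.

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitForPods polls pods matching selector in ns until all of them are
    // Running, printing one line per poll like the trace above.
    func waitForPods(ctx context.Context, cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
            if err != nil {
                return err
            }
            waiting := len(pods.Items) == 0 // no pods yet also counts as not ready
            for _, p := range pods.Items {
                if p.Status.Phase != corev1.PodRunning {
                    fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
                    waiting = true
                }
            }
            if !waiting {
                return nil
            }
            time.Sleep(500 * time.Millisecond) // cadence inferred from the log timestamps
        }
        return fmt.Errorf("timed out waiting for %s", selector)
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        if err := waitForPods(context.Background(), cs, "kube-system",
            "kubernetes.io/minikube-addons=csi-hostpath-driver", 6*time.Minute); err != nil {
            panic(err)
        }
    }

Run against a live cluster, this produces exactly the kind of one-line-per-poll trace shown above.
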
	I0930 03:24:35.033478    2019 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0930 03:24:35.033486    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 03:24:35.510622    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 03:24:36.012796    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 03:24:36.513458    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 03:24:37.017012    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 03:24:37.512408    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 03:24:38.012504    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 03:24:38.512822    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 03:24:39.014293    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 03:24:39.516851    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 03:24:40.019297    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 03:24:40.512569    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 03:24:41.012551    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 03:24:41.512477    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 03:24:42.016862    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 03:24:42.513350    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 03:24:43.016668    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 03:24:43.513578    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 03:24:44.017186    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 03:24:44.512973    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 03:24:45.013214    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 03:24:45.512231    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 03:24:46.011929    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 03:24:46.513315    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 03:24:47.013373    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 03:24:47.512467    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 03:24:48.012853    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 03:24:48.513831    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 03:24:49.015086    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 03:24:49.513995    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 03:24:50.018938    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 03:24:50.513419    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 03:24:51.016082    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 03:24:51.513236    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 03:24:52.016767    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 03:24:52.512269    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 03:24:53.012559    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 03:24:53.512118    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 03:24:54.013774    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 03:24:54.513485    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 03:24:55.011171    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 03:24:55.511618    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 03:24:56.012463    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 03:24:56.515594    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 03:24:57.011429    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 03:24:57.511707    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 03:24:58.009925    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 03:24:58.512331    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 03:24:59.013875    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 03:24:59.512314    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 03:25:00.015705    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 03:25:00.512036    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 03:25:01.010889    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 03:25:01.512294    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 03:25:02.018215    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 03:25:02.511798    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 03:25:03.012264    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 03:25:03.512200    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 03:25:04.017397    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 03:25:04.513280    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 03:25:05.020912    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 03:25:05.516161    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 03:25:06.011606    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 03:25:06.512299    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 03:25:07.016183    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 03:25:07.511671    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 03:25:08.010769    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 03:25:08.512546    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 03:25:09.015699    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 03:25:09.512275    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 03:25:10.016283    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 03:25:10.511569    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 03:25:11.011791    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 03:25:11.511209    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 03:25:12.008751    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 03:25:12.511661    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 03:25:13.010671    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 03:25:13.512021    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 03:25:14.012793    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 03:25:14.518572    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 03:25:15.010116    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 03:25:15.510663    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 03:25:16.010595    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 03:25:16.510557    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 03:25:17.012695    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 03:25:17.510807    2019 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 03:25:18.010577    2019 kapi.go:107] duration metric: took 3m27.003742958s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0930 03:25:18.014933    2019 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-584000 cluster.
	I0930 03:25:18.017951    2019 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0930 03:25:18.022992    2019 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0930 03:25:18.026959    2019 out.go:177] * Enabled addons: storage-provisioner, volcano, ingress-dns, cloud-spanner, nvidia-device-plugin, metrics-server, inspektor-gadget, yakd, default-storageclass, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
	I0930 03:25:18.030927    2019 addons.go:510] duration metric: took 3m35.040856375s for enable addons: enabled=[storage-provisioner volcano ingress-dns cloud-spanner nvidia-device-plugin metrics-server inspektor-gadget yakd default-storageclass volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
	I0930 03:25:18.030939    2019 start.go:246] waiting for cluster config update ...
	I0930 03:25:18.030948    2019 start.go:255] writing updated cluster config ...
	I0930 03:25:18.031422    2019 ssh_runner.go:195] Run: rm -f paused
	I0930 03:25:18.183450    2019 start.go:600] kubectl: 1.29.2, cluster: 1.31.1 (minor skew: 2)
	I0930 03:25:18.187890    2019 out.go:201] 
	W0930 03:25:18.192010    2019 out.go:270] ! /usr/local/bin/kubectl is version 1.29.2, which may have incompatibilities with Kubernetes 1.31.1.
	I0930 03:25:18.195885    2019 out.go:177]   - Want kubectl v1.31.1? Try 'minikube kubectl -- get pods -A'
	I0930 03:25:18.203919    2019 out.go:177] * Done! kubectl is now configured to use "addons-584000" cluster and "default" namespace by default
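
Two notes in the tail above deserve unpacking. The gcp-auth addon injects GCP credentials into new pods via a mutating webhook, and a pod opts out by carrying the gcp-auth-skip-secret label; the closing warning is minikube's standard skew check, flagging that the host kubectl (1.29.2) is two minor versions behind the cluster (1.31.1), outside kubectl's supported +/-1 version skew. Below is a minimal sketch of an opted-out pod built with client-go types; the label key comes from the message above, while the name, image, and the "true" value are illustrative assumptions (the log only says the key must be present).

    package main

    import (
        "context"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{
                Name:      "no-gcp-creds", // illustrative name
                Namespace: "default",
                Labels: map[string]string{
                    // Key taken from the gcp-auth message above; the value is
                    // an assumption -- the log only says to add the key.
                    "gcp-auth-skip-secret": "true",
                },
            },
            Spec: corev1.PodSpec{
                Containers: []corev1.Container{{Name: "app", Image: "busybox"}},
            },
        }
        if _, err := cs.CoreV1().Pods("default").Create(context.Background(), pod, metav1.CreateOptions{}); err != nil {
            panic(err)
        }
    }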
	
	
	==> Docker <==
	Sep 30 10:35:06 addons-584000 dockerd[1293]: time="2024-09-30T10:35:06.255932010Z" level=warning msg="cleanup warnings time=\"2024-09-30T10:35:06Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Sep 30 10:35:06 addons-584000 dockerd[1287]: time="2024-09-30T10:35:06.302597492Z" level=info msg="ignoring event" container=f19caa87011795e35f1d2e6dcc85decf2386ba3eb0589d9faa7b709e7f95369f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 30 10:35:06 addons-584000 dockerd[1293]: time="2024-09-30T10:35:06.302766487Z" level=info msg="shim disconnected" id=f19caa87011795e35f1d2e6dcc85decf2386ba3eb0589d9faa7b709e7f95369f namespace=moby
	Sep 30 10:35:06 addons-584000 dockerd[1293]: time="2024-09-30T10:35:06.302812695Z" level=warning msg="cleaning up after shim disconnected" id=f19caa87011795e35f1d2e6dcc85decf2386ba3eb0589d9faa7b709e7f95369f namespace=moby
	Sep 30 10:35:06 addons-584000 dockerd[1293]: time="2024-09-30T10:35:06.302817319Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 30 10:35:08 addons-584000 dockerd[1287]: time="2024-09-30T10:35:08.866580184Z" level=info msg="ignoring event" container=cfc6b01590fe5672cc500f2c13f6a99c3959d9dc1d79621177279353802048ff module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 30 10:35:08 addons-584000 dockerd[1293]: time="2024-09-30T10:35:08.867172794Z" level=info msg="shim disconnected" id=cfc6b01590fe5672cc500f2c13f6a99c3959d9dc1d79621177279353802048ff namespace=moby
	Sep 30 10:35:08 addons-584000 dockerd[1293]: time="2024-09-30T10:35:08.867209959Z" level=warning msg="cleaning up after shim disconnected" id=cfc6b01590fe5672cc500f2c13f6a99c3959d9dc1d79621177279353802048ff namespace=moby
	Sep 30 10:35:08 addons-584000 dockerd[1293]: time="2024-09-30T10:35:08.867214876Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 30 10:35:09 addons-584000 dockerd[1287]: time="2024-09-30T10:35:09.020679176Z" level=info msg="ignoring event" container=ba9cb3df61e56578881270f32ff5cda871d2f9fb22af68eb12e4762d487e65a6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 30 10:35:09 addons-584000 dockerd[1293]: time="2024-09-30T10:35:09.020776465Z" level=info msg="shim disconnected" id=ba9cb3df61e56578881270f32ff5cda871d2f9fb22af68eb12e4762d487e65a6 namespace=moby
	Sep 30 10:35:09 addons-584000 dockerd[1293]: time="2024-09-30T10:35:09.020809006Z" level=warning msg="cleaning up after shim disconnected" id=ba9cb3df61e56578881270f32ff5cda871d2f9fb22af68eb12e4762d487e65a6 namespace=moby
	Sep 30 10:35:09 addons-584000 dockerd[1293]: time="2024-09-30T10:35:09.020813589Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 30 10:35:09 addons-584000 dockerd[1293]: time="2024-09-30T10:35:09.039029405Z" level=info msg="shim disconnected" id=da831625faff590e81432de75ebf7fc0940330d674e18a1f55765f71c88f4635 namespace=moby
	Sep 30 10:35:09 addons-584000 dockerd[1293]: time="2024-09-30T10:35:09.039142110Z" level=warning msg="cleaning up after shim disconnected" id=da831625faff590e81432de75ebf7fc0940330d674e18a1f55765f71c88f4635 namespace=moby
	Sep 30 10:35:09 addons-584000 dockerd[1293]: time="2024-09-30T10:35:09.039152235Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 30 10:35:09 addons-584000 dockerd[1287]: time="2024-09-30T10:35:09.039688012Z" level=info msg="ignoring event" container=da831625faff590e81432de75ebf7fc0940330d674e18a1f55765f71c88f4635 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 30 10:35:09 addons-584000 dockerd[1293]: time="2024-09-30T10:35:09.103980080Z" level=info msg="shim disconnected" id=59225deeb1fa6ca8a401f5f03b87430fc1f4dcd4f5cc244059cf8726f439a150 namespace=moby
	Sep 30 10:35:09 addons-584000 dockerd[1293]: time="2024-09-30T10:35:09.104012704Z" level=warning msg="cleaning up after shim disconnected" id=59225deeb1fa6ca8a401f5f03b87430fc1f4dcd4f5cc244059cf8726f439a150 namespace=moby
	Sep 30 10:35:09 addons-584000 dockerd[1293]: time="2024-09-30T10:35:09.104017246Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 30 10:35:09 addons-584000 dockerd[1287]: time="2024-09-30T10:35:09.103854917Z" level=info msg="ignoring event" container=59225deeb1fa6ca8a401f5f03b87430fc1f4dcd4f5cc244059cf8726f439a150 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 30 10:35:09 addons-584000 dockerd[1293]: time="2024-09-30T10:35:09.144131571Z" level=info msg="shim disconnected" id=13784ade905f964188b6e0f2e23a50ef464ed6d8fede7288f7db870abee76ae6 namespace=moby
	Sep 30 10:35:09 addons-584000 dockerd[1287]: time="2024-09-30T10:35:09.144213028Z" level=info msg="ignoring event" container=13784ade905f964188b6e0f2e23a50ef464ed6d8fede7288f7db870abee76ae6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 30 10:35:09 addons-584000 dockerd[1293]: time="2024-09-30T10:35:09.144273318Z" level=warning msg="cleaning up after shim disconnected" id=13784ade905f964188b6e0f2e23a50ef464ed6d8fede7288f7db870abee76ae6 namespace=moby
	Sep 30 10:35:09 addons-584000 dockerd[1293]: time="2024-09-30T10:35:09.144294775Z" level=info msg="cleaning up dead shim" namespace=moby
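
The dockerd/containerd lines in this section are routine teardown chatter for exiting containers: each container's runc shim exits, containerd logs "shim disconnected" and cleans up, and dockerd drops the matching TaskDelete event; the lone "failed to remove runc container ... exit status 255" warning is cleanup noise from a shim that had already gone away. The same lifecycle can be watched from the client side with the Docker Go SDK; a rough sketch, assuming the pre-v26 SDK layout (newer releases renamed the options type to events.ListOptions):

    package main

    import (
        "context"
        "fmt"

        "github.com/docker/docker/api/types"
        "github.com/docker/docker/client"
    )

    func main() {
        cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
        if err != nil {
            panic(err)
        }
        // Streams daemon events; a container "die" followed by "destroy"
        // corresponds to the shim-disconnected/TaskDelete sequence above.
        msgs, errs := cli.Events(context.Background(), types.EventsOptions{})
        for {
            select {
            case m := <-msgs:
                fmt.Printf("%s %s %s\n", m.Type, m.Action, m.Actor.ID)
            case err := <-errs:
                panic(err)
            }
        }
    }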
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD
	e740c9daba3ac       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:e6c5b3bc32072ea370d34c27836efd11b3519d25bd444c2a8efc339cff0e20fb                                 9 minutes ago       Running             gcp-auth                                 0                   347ad20e5dbe5       gcp-auth-89d5ffd79-ss5lx
	1242ff897d2e5       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          11 minutes ago      Running             csi-snapshotter                          0                   f5cd48e6bbd0e       csi-hostpathplugin-q6gtd
	84f0cd104e246       registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8                          11 minutes ago      Running             csi-provisioner                          0                   f5cd48e6bbd0e       csi-hostpathplugin-q6gtd
	2752ada979610       registry.k8s.io/sig-storage/livenessprobe@sha256:cacee2b5c36dd59d4c7e8469c05c9e4ef53ecb2df9025fa8c10cdaf61bce62f0                            11 minutes ago      Running             liveness-probe                           0                   f5cd48e6bbd0e       csi-hostpathplugin-q6gtd
	3eb132b5d809c       registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5                           11 minutes ago      Running             hostpath                                 0                   f5cd48e6bbd0e       csi-hostpathplugin-q6gtd
	fb7a6877e521e       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:f1c25991bac2fbb7f5fcf91ed9438df31e30edee6bed5a780464238aa09ad24c                11 minutes ago      Running             node-driver-registrar                    0                   f5cd48e6bbd0e       csi-hostpathplugin-q6gtd
	cecac3fc05afa       registry.k8s.io/ingress-nginx/controller@sha256:d5f8217feeac4887cb1ed21f27c2674e58be06bd8f5184cacea2a69abaf78dce                             11 minutes ago      Running             controller                               0                   06d110e8beee5       ingress-nginx-controller-bc57996ff-thhxk
	e2803a34814c8       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec                            11 minutes ago      Running             gadget                                   0                   2e7b0f132bc6c       gadget-8vdvd
	7b5ee12a8e674       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      12 minutes ago      Running             volume-snapshot-controller               0                   2a9d0503e1e41       snapshot-controller-56fcc65765-tdrht
	488b20f1afd28       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      12 minutes ago      Running             volume-snapshot-controller               0                   503dc0986a14d       snapshot-controller-56fcc65765-hvtbm
	dce64e0329042       rancher/local-path-provisioner@sha256:e34c88ae0affb1cdefbb874140d6339d4a27ec4ee420ae8199cd839997b05246                                       12 minutes ago      Running             local-path-provisioner                   0                   6b06672c0e3b8       local-path-provisioner-86d989889c-k27gt
	03a4f281448cd       registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7                              12 minutes ago      Running             csi-resizer                              0                   24aabb9a1abbb       csi-hostpath-resizer-0
	e0519f79eba02       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:80b9ba94aa2afe24553d69bd165a6a51552d1582d68618ec00d3b804a7d9193c   12 minutes ago      Running             csi-external-health-monitor-controller   0                   f5cd48e6bbd0e       csi-hostpathplugin-q6gtd
	c8e6347385bd1       registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b                             12 minutes ago      Running             csi-attacher                             0                   71ba1cd8ccf06       csi-hostpath-attacher-0
	3131de7266ebd       420193b27261a                                                                                                                                13 minutes ago      Exited              patch                                    2                   3c0342cdf8537       ingress-nginx-admission-patch-grwfk
	2cc1893a45ff4       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a320a50cc91bd15fd2d6fa6de58bd98c1bd64b9a6f926ce23a600d87043455a3                   13 minutes ago      Exited              create                                   0                   4deef74c0180b       ingress-nginx-admission-create-bqsgv
	53d346ab3c5a6       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4211a1de532376c881851542238121b26792225faa36a7b02dccad88fd05797c                             13 minutes ago      Running             minikube-ingress-dns                     0                   5493217bdc546       kube-ingress-dns-minikube
	2b295300f71e0       ba04bb24b9575                                                                                                                                13 minutes ago      Running             storage-provisioner                      0                   7f85b977402b1       storage-provisioner
	4f8a68326fdb7       2f6c962e7b831                                                                                                                                13 minutes ago      Running             coredns                                  0                   cb1dc4abba93f       coredns-7c65d6cfc9-6nzmp
	6bd6f7185e7e2       24a140c548c07                                                                                                                                13 minutes ago      Running             kube-proxy                               0                   fcbf076dba20a       kube-proxy-p4msl
	ef1dbc9b30b76       7f8aa378bb47d                                                                                                                                13 minutes ago      Running             kube-scheduler                           0                   312ce736c3704       kube-scheduler-addons-584000
	409757ee86c9b       279f381cb3736                                                                                                                                13 minutes ago      Running             kube-controller-manager                  0                   3cca721b18f78       kube-controller-manager-addons-584000
	ebfb948bdaa05       27e3830e14027                                                                                                                                13 minutes ago      Running             etcd                                     0                   532edc36dde33       etcd-addons-584000
	4735d092b4dda       d3f53a98c0a9d                                                                                                                                13 minutes ago      Running             kube-apiserver                           0                   1115fd1f8a28f       kube-apiserver-addons-584000
	
	
	==> controller_ingress [cecac3fc05af] <==
	W0930 10:23:27.263529       7 client_config.go:659] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
	I0930 10:23:27.263624       7 main.go:205] "Creating API client" host="https://10.96.0.1:443"
	I0930 10:23:27.268795       7 main.go:248] "Running in Kubernetes cluster" major="1" minor="31" git="v1.31.1" state="clean" commit="948afe5ca072329a73c8e79ed5938717a5cb3d21" platform="linux/arm64"
	I0930 10:23:27.407842       7 main.go:101] "SSL fake certificate created" file="/etc/ingress-controller/ssl/default-fake-certificate.pem"
	I0930 10:23:27.415428       7 ssl.go:535] "loading tls certificate" path="/usr/local/certificates/cert" key="/usr/local/certificates/key"
	I0930 10:23:27.420182       7 nginx.go:271] "Starting NGINX Ingress controller"
	I0930 10:23:27.426557       7 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"d420f866-d12a-4e61-8d93-fb20a5b37309", APIVersion:"v1", ResourceVersion:"391", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/ingress-nginx-controller
	I0930 10:23:27.426756       7 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"tcp-services", UID:"00930060-64f6-42f2-aac7-9e94e8611fc3", APIVersion:"v1", ResourceVersion:"392", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/tcp-services
	I0930 10:23:27.426775       7 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"udp-services", UID:"dd3c75c3-a892-4e7c-bc52-2c9781fedd35", APIVersion:"v1", ResourceVersion:"393", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/udp-services
	I0930 10:23:28.621239       7 nginx.go:317] "Starting NGINX process"
	I0930 10:23:28.621428       7 leaderelection.go:250] attempting to acquire leader lease ingress-nginx/ingress-nginx-leader...
	I0930 10:23:28.622074       7 nginx.go:337] "Starting validation webhook" address=":8443" certPath="/usr/local/certificates/cert" keyPath="/usr/local/certificates/key"
	I0930 10:23:28.622441       7 controller.go:193] "Configuration changes detected, backend reload required"
	I0930 10:23:28.629092       7 leaderelection.go:260] successfully acquired lease ingress-nginx/ingress-nginx-leader
	I0930 10:23:28.629663       7 status.go:85] "New leader elected" identity="ingress-nginx-controller-bc57996ff-thhxk"
	I0930 10:23:28.634216       7 status.go:219] "POD is not ready" pod="ingress-nginx/ingress-nginx-controller-bc57996ff-thhxk" node="addons-584000"
	I0930 10:23:28.659967       7 controller.go:213] "Backend successfully reloaded"
	I0930 10:23:28.660107       7 controller.go:224] "Initial sync, sleeping for 1 second"
	I0930 10:23:28.660326       7 event.go:377] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-bc57996ff-thhxk", UID:"cbe416ff-1c83-40a6-aaf4-30f5671eb9d8", APIVersion:"v1", ResourceVersion:"417", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
	  Build:         46e76e5916813cfca2a9b0bfdc34b69a0000f6b9
	  Repository:    https://github.com/kubernetes/ingress-nginx
	  nginx version: nginx/1.25.5
	
	-------------------------------------------------------------------------------
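
Two things in this controller log merit a gloss. The indented Build/Repository/nginx version lines are the tail of ingress-nginx's startup release banner, and the leaderelection.go pair ("attempting to acquire leader lease ingress-nginx/ingress-nginx-leader...", "successfully acquired lease") is client-go's stock Lease-based leader election. A generic sketch of acquiring such a lease follows; the lease name and namespace are taken from the log, while the identity, timings, and callbacks are illustrative rather than the controller's real wiring.

    package main

    import (
        "context"
        "os"
        "time"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
        "k8s.io/client-go/tools/leaderelection"
        "k8s.io/client-go/tools/leaderelection/resourcelock"
    )

    func main() {
        cfg, err := rest.InClusterConfig() // the controller runs in-cluster
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        hostname, _ := os.Hostname() // pod name doubles as the candidate identity
        lock := &resourcelock.LeaseLock{
            LeaseMeta:  metav1.ObjectMeta{Name: "ingress-nginx-leader", Namespace: "ingress-nginx"},
            Client:     cs.CoordinationV1(),
            LockConfig: resourcelock.ResourceLockConfig{Identity: hostname},
        }
        leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
            Lock:          lock,
            LeaseDuration: 15 * time.Second, // illustrative timings
            RenewDeadline: 10 * time.Second,
            RetryPeriod:   2 * time.Second,
            Callbacks: leaderelection.LeaderCallbacks{
                OnStartedLeading: func(ctx context.Context) { /* act as the active controller */ },
                OnStoppedLeading: func() { /* lost the lease; step down */ },
            },
        })
    }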
	
	
	
	==> coredns [4f8a68326fdb] <==
	[INFO] 127.0.0.1:60547 - 8609 "HINFO IN 636190059743791788.5018677158080771702. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.013489651s
	[INFO] 10.244.0.11:35171 - 24649 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 97 false 1232" NXDOMAIN qr,aa,rd 179 0.000126208s
	[INFO] 10.244.0.11:35171 - 59988 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 97 false 1232" NXDOMAIN qr,aa,rd 179 0.00015854s
	[INFO] 10.244.0.11:35171 - 64414 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000042625s
	[INFO] 10.244.0.11:35171 - 49421 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.00007075s
	[INFO] 10.244.0.11:35171 - 45286 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000050875s
	[INFO] 10.244.0.11:35171 - 43922 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000042208s
	[INFO] 10.244.0.11:35171 - 24776 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000057333s
	[INFO] 10.244.0.11:35171 - 37325 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000028708s
	[INFO] 10.244.0.11:35447 - 15962 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000034208s
	[INFO] 10.244.0.11:35447 - 15712 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000070166s
	[INFO] 10.244.0.11:52254 - 9729 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000017708s
	[INFO] 10.244.0.11:52254 - 10194 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000016833s
	[INFO] 10.244.0.11:46977 - 32160 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000015542s
	[INFO] 10.244.0.11:46977 - 32499 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000016292s
	[INFO] 10.244.0.11:59286 - 29297 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000019458s
	[INFO] 10.244.0.11:59286 - 29637 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000468123s
	[INFO] 10.244.0.25:41865 - 40682 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000118374s
	[INFO] 10.244.0.25:56696 - 43885 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000032333s
	[INFO] 10.244.0.25:38345 - 15458 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000054791s
	[INFO] 10.244.0.25:53370 - 28999 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000133041s
	[INFO] 10.244.0.25:37808 - 31295 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000044208s
	[INFO] 10.244.0.25:52209 - 4185 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00016479s
	[INFO] 10.244.0.25:60145 - 28405 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 192 0.000755744s
	[INFO] 10.244.0.25:58286 - 11058 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001254282s
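
The NXDOMAIN/NOERROR pattern in this coredns log is the normal ndots:5 search-path walk, not a failure: registry.kube-system.svc.cluster.local has fewer than five dots, so the pod's resolver first tries it suffixed with each resolv.conf search domain (kube-system.svc.cluster.local, svc.cluster.local, cluster.local), collecting an NXDOMAIN for each, and only then does the bare name answer NOERROR. From inside a pod, a trailing dot marks the name fully qualified and skips the walk entirely; a small sketch, only meaningful when run in-cluster:

    package main

    import (
        "fmt"
        "net"
    )

    func main() {
        // Unqualified: the resolver appends each search domain first, which
        // is exactly the NXDOMAIN series in the coredns log above.
        // Trailing dot: fully qualified, one query, straight to NOERROR.
        for _, name := range []string{
            "registry.kube-system.svc.cluster.local",
            "registry.kube-system.svc.cluster.local.",
        } {
            addrs, err := net.LookupHost(name)
            fmt.Println(name, addrs, err)
        }
    }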
	
	
	==> describe nodes <==
	Name:               addons-584000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-584000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b68b4b088317c82ffa16da1c47933e77f0f5d128
	                    minikube.k8s.io/name=addons-584000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_30T03_21_37_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-584000
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-584000"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 30 Sep 2024 10:21:34 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-584000
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 30 Sep 2024 10:35:02 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 30 Sep 2024 10:34:44 +0000   Mon, 30 Sep 2024 10:21:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 30 Sep 2024 10:34:44 +0000   Mon, 30 Sep 2024 10:21:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 30 Sep 2024 10:34:44 +0000   Mon, 30 Sep 2024 10:21:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 30 Sep 2024 10:34:44 +0000   Mon, 30 Sep 2024 10:21:40 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.2
	  Hostname:    addons-584000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904740Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904740Ki
	  pods:               110
	System Info:
	  Machine ID:                 7042773636d24bf289a3f29a819c3d27
	  System UUID:                7042773636d24bf289a3f29a819c3d27
	  Boot ID:                    a6f316da-61ce-4985-af66-f003a1bf2202
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://27.3.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (18 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m12s
	  gadget                      gadget-8vdvd                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  gcp-auth                    gcp-auth-89d5ffd79-ss5lx                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  ingress-nginx               ingress-nginx-controller-bc57996ff-thhxk    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         13m
	  kube-system                 coredns-7c65d6cfc9-6nzmp                    100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     13m
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 csi-hostpathplugin-q6gtd                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 etcd-addons-584000                          100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         13m
	  kube-system                 kube-apiserver-addons-584000                250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-addons-584000       200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-p4msl                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-addons-584000                100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 snapshot-controller-56fcc65765-hvtbm        0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 snapshot-controller-56fcc65765-tdrht        0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  local-path-storage          local-path-provisioner-86d989889c-k27gt     0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             260Mi (6%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 13m                kube-proxy       
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m (x2 over 13m)  kubelet          Node addons-584000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x2 over 13m)  kubelet          Node addons-584000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x2 over 13m)  kubelet          Node addons-584000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                13m                kubelet          Node addons-584000 status is now: NodeReady
	  Normal  RegisteredNode           13m                node-controller  Node addons-584000 event: Registered Node addons-584000 in Controller
	
	
	==> dmesg <==
	[  +6.142348] systemd-fstab-generator[2182]: Ignoring "noauto" option for root device
	[  +0.056431] kauditd_printk_skb: 12 callbacks suppressed
	[  +7.403142] kauditd_printk_skb: 331 callbacks suppressed
	[Sep30 10:22] kauditd_printk_skb: 16 callbacks suppressed
	[  +5.167220] kauditd_printk_skb: 8 callbacks suppressed
	[  +9.507853] kauditd_printk_skb: 8 callbacks suppressed
	[  +6.205345] kauditd_printk_skb: 19 callbacks suppressed
	[  +5.382087] kauditd_printk_skb: 2 callbacks suppressed
	[Sep30 10:23] kauditd_printk_skb: 7 callbacks suppressed
	[  +6.877971] kauditd_printk_skb: 18 callbacks suppressed
	[ +16.185002] kauditd_printk_skb: 4 callbacks suppressed
	[Sep30 10:24] kauditd_printk_skb: 9 callbacks suppressed
	[Sep30 10:25] kauditd_printk_skb: 38 callbacks suppressed
	[ +16.154187] kauditd_printk_skb: 2 callbacks suppressed
	[ +29.721679] kauditd_printk_skb: 7 callbacks suppressed
	[ +10.856727] kauditd_printk_skb: 18 callbacks suppressed
	[Sep30 10:26] kauditd_printk_skb: 2 callbacks suppressed
	[Sep30 10:34] kauditd_printk_skb: 11 callbacks suppressed
	[ +11.974346] kauditd_printk_skb: 13 callbacks suppressed
	[ +13.814559] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.489978] kauditd_printk_skb: 22 callbacks suppressed
	[  +5.508854] kauditd_printk_skb: 34 callbacks suppressed
	[  +5.178293] kauditd_printk_skb: 8 callbacks suppressed
	[ +10.411091] kauditd_printk_skb: 4 callbacks suppressed
	[Sep30 10:35] kauditd_printk_skb: 7 callbacks suppressed
	
	
	==> etcd [ebfb948bdaa0] <==
	{"level":"info","ts":"2024-09-30T10:21:34.231665Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 is starting a new election at term 1"}
	{"level":"info","ts":"2024-09-30T10:21:34.231807Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-09-30T10:21:34.231874Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 received MsgPreVoteResp from c46d288d2fcb0590 at term 1"}
	{"level":"info","ts":"2024-09-30T10:21:34.231924Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 became candidate at term 2"}
	{"level":"info","ts":"2024-09-30T10:21:34.231940Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 received MsgVoteResp from c46d288d2fcb0590 at term 2"}
	{"level":"info","ts":"2024-09-30T10:21:34.231967Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 became leader at term 2"}
	{"level":"info","ts":"2024-09-30T10:21:34.231989Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: c46d288d2fcb0590 elected leader c46d288d2fcb0590 at term 2"}
	{"level":"info","ts":"2024-09-30T10:21:34.234444Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"c46d288d2fcb0590","local-member-attributes":"{Name:addons-584000 ClientURLs:[https://192.168.105.2:2379]}","request-path":"/0/members/c46d288d2fcb0590/attributes","cluster-id":"6e03e7863b4f9c54","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-30T10:21:34.234471Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-30T10:21:34.234575Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-30T10:21:34.234519Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-30T10:21:34.237879Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-30T10:21:34.241304Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-30T10:21:34.241333Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.2:2379"}
	{"level":"info","ts":"2024-09-30T10:21:34.242004Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-30T10:21:34.242066Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-30T10:21:34.242330Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6e03e7863b4f9c54","local-member-id":"c46d288d2fcb0590","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-30T10:21:34.242552Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-30T10:21:34.242729Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-30T10:21:34.243656Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-30T10:22:01.490664Z","caller":"traceutil/trace.go:171","msg":"trace[909313776] transaction","detail":"{read_only:false; response_revision:967; number_of_response:1; }","duration":"102.896458ms","start":"2024-09-30T10:22:01.387758Z","end":"2024-09-30T10:22:01.490654Z","steps":["trace[909313776] 'process raft request'  (duration: 102.726583ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-30T10:23:13.187322Z","caller":"traceutil/trace.go:171","msg":"trace[2119525116] transaction","detail":"{read_only:false; response_revision:1167; number_of_response:1; }","duration":"102.9951ms","start":"2024-09-30T10:23:13.084318Z","end":"2024-09-30T10:23:13.187313Z","steps":["trace[2119525116] 'process raft request'  (duration: 102.939725ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-30T10:31:34.818882Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1802}
	{"level":"info","ts":"2024-09-30T10:31:34.915704Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1802,"took":"93.243682ms","hash":3563879530,"current-db-size-bytes":8822784,"current-db-size":"8.8 MB","current-db-size-in-use-bytes":4673536,"current-db-size-in-use":"4.7 MB"}
	{"level":"info","ts":"2024-09-30T10:31:34.916266Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3563879530,"revision":1802,"compact-revision":-1}
	
	
	==> gcp-auth [e740c9daba3a] <==
	2024/09/30 10:25:33 Ready to write response ...
	2024/09/30 10:25:34 Ready to marshal response ...
	2024/09/30 10:25:34 Ready to write response ...
	2024/09/30 10:25:57 Ready to marshal response ...
	2024/09/30 10:25:57 Ready to write response ...
	2024/09/30 10:25:57 Ready to marshal response ...
	2024/09/30 10:25:57 Ready to write response ...
	2024/09/30 10:25:57 Ready to marshal response ...
	2024/09/30 10:25:57 Ready to write response ...
	2024/09/30 10:33:58 Ready to marshal response ...
	2024/09/30 10:33:58 Ready to write response ...
	2024/09/30 10:33:58 Ready to marshal response ...
	2024/09/30 10:33:58 Ready to write response ...
	2024/09/30 10:33:58 Ready to marshal response ...
	2024/09/30 10:33:58 Ready to write response ...
	2024/09/30 10:34:08 Ready to marshal response ...
	2024/09/30 10:34:08 Ready to write response ...
	2024/09/30 10:34:29 Ready to marshal response ...
	2024/09/30 10:34:29 Ready to write response ...
	2024/09/30 10:34:29 Ready to marshal response ...
	2024/09/30 10:34:29 Ready to write response ...
	2024/09/30 10:34:39 Ready to marshal response ...
	2024/09/30 10:34:39 Ready to write response ...
	2024/09/30 10:34:55 Ready to marshal response ...
	2024/09/30 10:34:55 Ready to write response ...
	
	
	==> kernel <==
	 10:35:09 up 13 min,  0 users,  load average: 0.30, 0.50, 0.40
	Linux addons-584000 5.10.207 #1 SMP PREEMPT Mon Sep 23 18:07:35 UTC 2024 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [4735d092b4dd] <==
	E0930 10:24:35.009006       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.111.161.158:443: connect: connection refused" logger="UnhandledError"
	W0930 10:24:54.166092       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.111.161.158:443: connect: connection refused
	E0930 10:24:54.166265       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.111.161.158:443: connect: connection refused" logger="UnhandledError"
	W0930 10:24:54.166092       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.111.161.158:443: connect: connection refused
	E0930 10:24:54.166356       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.111.161.158:443: connect: connection refused" logger="UnhandledError"
	I0930 10:25:33.547957       1 controller.go:615] quota admission added evaluator for: jobs.batch.volcano.sh
	I0930 10:25:33.561116       1 controller.go:615] quota admission added evaluator for: podgroups.scheduling.volcano.sh
	I0930 10:25:46.900165       1 handler.go:286] Adding GroupVersion batch.volcano.sh v1alpha1 to ResourceManager
	I0930 10:25:46.920166       1 handler.go:286] Adding GroupVersion bus.volcano.sh v1alpha1 to ResourceManager
	I0930 10:25:47.096803       1 handler.go:286] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
	I0930 10:25:47.106776       1 handler.go:286] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
	I0930 10:25:47.120273       1 handler.go:286] Adding GroupVersion nodeinfo.volcano.sh v1alpha1 to ResourceManager
	I0930 10:25:47.137409       1 handler.go:286] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
	I0930 10:25:47.315072       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	I0930 10:25:47.403978       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	I0930 10:25:47.538322       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	W0930 10:25:48.065375       1 cacher.go:171] Terminating all watchers from cacher commands.bus.volcano.sh
	W0930 10:25:48.138666       1 cacher.go:171] Terminating all watchers from cacher queues.scheduling.volcano.sh
	W0930 10:25:48.397670       1 cacher.go:171] Terminating all watchers from cacher jobs.batch.volcano.sh
	W0930 10:25:48.416331       1 cacher.go:171] Terminating all watchers from cacher podgroups.scheduling.volcano.sh
	W0930 10:25:48.416334       1 cacher.go:171] Terminating all watchers from cacher numatopologies.nodeinfo.volcano.sh
	W0930 10:25:48.538701       1 cacher.go:171] Terminating all watchers from cacher jobflows.flow.volcano.sh
	W0930 10:25:48.607409       1 cacher.go:171] Terminating all watchers from cacher jobtemplates.flow.volcano.sh
	I0930 10:33:58.920955       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.100.241.140"}
	I0930 10:35:05.064186       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	
	
	==> kube-controller-manager [409757ee86c9] <==
	W0930 10:34:16.202944       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0930 10:34:16.203048       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0930 10:34:19.241160       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="headlamp"
	I0930 10:34:19.442689       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="yakd-dashboard/yakd-dashboard-67d98fc6b" duration="2.417µs"
	I0930 10:34:29.506468       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="yakd-dashboard"
	W0930 10:34:32.720901       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0930 10:34:32.721003       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0930 10:34:35.126727       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0930 10:34:35.126783       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0930 10:34:37.203750       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0930 10:34:37.203771       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0930 10:34:44.446226       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/cloud-spanner-emulator-5b584cc74" duration="3.542µs"
	I0930 10:34:44.964853       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-584000"
	W0930 10:34:46.992392       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0930 10:34:46.992474       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0930 10:34:49.699244       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-84c5f94fbc" duration="1.75µs"
	W0930 10:34:50.622817       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0930 10:34:50.623225       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0930 10:34:59.179591       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0930 10:34:59.179708       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0930 10:35:01.345100       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0930 10:35:01.345200       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0930 10:35:06.674301       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0930 10:35:06.674357       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0930 10:35:08.983836       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/registry-66c9cd494c" duration="1.333µs"
	
	
	==> kube-proxy [6bd6f7185e7e] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0930 10:21:43.137278       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0930 10:21:43.147256       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.105.2"]
	E0930 10:21:43.147291       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0930 10:21:43.207194       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0930 10:21:43.207217       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0930 10:21:43.207232       1 server_linux.go:169] "Using iptables Proxier"
	I0930 10:21:43.210761       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0930 10:21:43.211028       1 server.go:483] "Version info" version="v1.31.1"
	I0930 10:21:43.211037       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0930 10:21:43.215247       1 config.go:199] "Starting service config controller"
	I0930 10:21:43.215262       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0930 10:21:43.215277       1 config.go:105] "Starting endpoint slice config controller"
	I0930 10:21:43.215289       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0930 10:21:43.216311       1 config.go:328] "Starting node config controller"
	I0930 10:21:43.216316       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0930 10:21:43.315870       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0930 10:21:43.315919       1 shared_informer.go:320] Caches are synced for service config
	I0930 10:21:43.316428       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [ef1dbc9b30b7] <==
	W0930 10:21:34.790658       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0930 10:21:34.790892       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0930 10:21:34.790699       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0930 10:21:34.790964       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0930 10:21:34.790709       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0930 10:21:34.791014       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0930 10:21:34.790736       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0930 10:21:34.791057       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0930 10:21:34.790745       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0930 10:21:34.791101       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0930 10:21:34.790767       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0930 10:21:34.791131       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0930 10:21:34.791247       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0930 10:21:34.791262       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0930 10:21:34.791322       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0930 10:21:34.791337       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0930 10:21:34.791377       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0930 10:21:34.791389       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0930 10:21:35.661156       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0930 10:21:35.661207       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0930 10:21:35.706808       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0930 10:21:35.706869       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0930 10:21:35.805248       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0930 10:21:35.805274       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0930 10:21:35.988780       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 30 10:35:07 addons-584000 kubelet[2054]: I0930 10:35:07.248874    2054 scope.go:117] "RemoveContainer" containerID="ffea0617ad40e01be57b9d06dfcc26a5089f6a8768326e5e8bd97ecfb3642a96"
	Sep 30 10:35:07 addons-584000 kubelet[2054]: I0930 10:35:07.259855    2054 scope.go:117] "RemoveContainer" containerID="ffea0617ad40e01be57b9d06dfcc26a5089f6a8768326e5e8bd97ecfb3642a96"
	Sep 30 10:35:07 addons-584000 kubelet[2054]: E0930 10:35:07.262694    2054 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: ffea0617ad40e01be57b9d06dfcc26a5089f6a8768326e5e8bd97ecfb3642a96" containerID="ffea0617ad40e01be57b9d06dfcc26a5089f6a8768326e5e8bd97ecfb3642a96"
	Sep 30 10:35:07 addons-584000 kubelet[2054]: I0930 10:35:07.262716    2054 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"ffea0617ad40e01be57b9d06dfcc26a5089f6a8768326e5e8bd97ecfb3642a96"} err="failed to get container status \"ffea0617ad40e01be57b9d06dfcc26a5089f6a8768326e5e8bd97ecfb3642a96\": rpc error: code = Unknown desc = Error response from daemon: No such container: ffea0617ad40e01be57b9d06dfcc26a5089f6a8768326e5e8bd97ecfb3642a96"
	Sep 30 10:35:08 addons-584000 kubelet[2054]: I0930 10:35:08.910462    2054 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q62h4\" (UniqueName: \"kubernetes.io/projected/83128d96-6515-4413-bf53-c2e02e81496a-kube-api-access-q62h4\") pod \"83128d96-6515-4413-bf53-c2e02e81496a\" (UID: \"83128d96-6515-4413-bf53-c2e02e81496a\") "
	Sep 30 10:35:08 addons-584000 kubelet[2054]: I0930 10:35:08.910484    2054 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/83128d96-6515-4413-bf53-c2e02e81496a-gcp-creds\") pod \"83128d96-6515-4413-bf53-c2e02e81496a\" (UID: \"83128d96-6515-4413-bf53-c2e02e81496a\") "
	Sep 30 10:35:08 addons-584000 kubelet[2054]: I0930 10:35:08.910523    2054 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/83128d96-6515-4413-bf53-c2e02e81496a-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "83128d96-6515-4413-bf53-c2e02e81496a" (UID: "83128d96-6515-4413-bf53-c2e02e81496a"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 30 10:35:08 addons-584000 kubelet[2054]: I0930 10:35:08.914645    2054 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83128d96-6515-4413-bf53-c2e02e81496a-kube-api-access-q62h4" (OuterVolumeSpecName: "kube-api-access-q62h4") pod "83128d96-6515-4413-bf53-c2e02e81496a" (UID: "83128d96-6515-4413-bf53-c2e02e81496a"). InnerVolumeSpecName "kube-api-access-q62h4". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 30 10:35:09 addons-584000 kubelet[2054]: I0930 10:35:09.011017    2054 reconciler_common.go:288] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/83128d96-6515-4413-bf53-c2e02e81496a-gcp-creds\") on node \"addons-584000\" DevicePath \"\""
	Sep 30 10:35:09 addons-584000 kubelet[2054]: I0930 10:35:09.011037    2054 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-q62h4\" (UniqueName: \"kubernetes.io/projected/83128d96-6515-4413-bf53-c2e02e81496a-kube-api-access-q62h4\") on node \"addons-584000\" DevicePath \"\""
	Sep 30 10:35:09 addons-584000 kubelet[2054]: I0930 10:35:09.205213    2054 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5c0587fb-e523-4b3a-8bcc-3c24079f8b81" path="/var/lib/kubelet/pods/5c0587fb-e523-4b3a-8bcc-3c24079f8b81/volumes"
	Sep 30 10:35:09 addons-584000 kubelet[2054]: I0930 10:35:09.297035    2054 scope.go:117] "RemoveContainer" containerID="da831625faff590e81432de75ebf7fc0940330d674e18a1f55765f71c88f4635"
	Sep 30 10:35:09 addons-584000 kubelet[2054]: I0930 10:35:09.313743    2054 scope.go:117] "RemoveContainer" containerID="da831625faff590e81432de75ebf7fc0940330d674e18a1f55765f71c88f4635"
	Sep 30 10:35:09 addons-584000 kubelet[2054]: I0930 10:35:09.314421    2054 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qjzh4\" (UniqueName: \"kubernetes.io/projected/151b7d8c-f9bc-4089-a54a-897445c55163-kube-api-access-qjzh4\") pod \"151b7d8c-f9bc-4089-a54a-897445c55163\" (UID: \"151b7d8c-f9bc-4089-a54a-897445c55163\") "
	Sep 30 10:35:09 addons-584000 kubelet[2054]: I0930 10:35:09.314444    2054 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mlq54\" (UniqueName: \"kubernetes.io/projected/230e32a5-8b5f-413f-b994-093070028d06-kube-api-access-mlq54\") pod \"230e32a5-8b5f-413f-b994-093070028d06\" (UID: \"230e32a5-8b5f-413f-b994-093070028d06\") "
	Sep 30 10:35:09 addons-584000 kubelet[2054]: E0930 10:35:09.315783    2054 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: da831625faff590e81432de75ebf7fc0940330d674e18a1f55765f71c88f4635" containerID="da831625faff590e81432de75ebf7fc0940330d674e18a1f55765f71c88f4635"
	Sep 30 10:35:09 addons-584000 kubelet[2054]: I0930 10:35:09.315799    2054 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"da831625faff590e81432de75ebf7fc0940330d674e18a1f55765f71c88f4635"} err="failed to get container status \"da831625faff590e81432de75ebf7fc0940330d674e18a1f55765f71c88f4635\": rpc error: code = Unknown desc = Error response from daemon: No such container: da831625faff590e81432de75ebf7fc0940330d674e18a1f55765f71c88f4635"
	Sep 30 10:35:09 addons-584000 kubelet[2054]: I0930 10:35:09.315810    2054 scope.go:117] "RemoveContainer" containerID="ba9cb3df61e56578881270f32ff5cda871d2f9fb22af68eb12e4762d487e65a6"
	Sep 30 10:35:09 addons-584000 kubelet[2054]: I0930 10:35:09.316427    2054 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/230e32a5-8b5f-413f-b994-093070028d06-kube-api-access-mlq54" (OuterVolumeSpecName: "kube-api-access-mlq54") pod "230e32a5-8b5f-413f-b994-093070028d06" (UID: "230e32a5-8b5f-413f-b994-093070028d06"). InnerVolumeSpecName "kube-api-access-mlq54". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 30 10:35:09 addons-584000 kubelet[2054]: I0930 10:35:09.316464    2054 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/151b7d8c-f9bc-4089-a54a-897445c55163-kube-api-access-qjzh4" (OuterVolumeSpecName: "kube-api-access-qjzh4") pod "151b7d8c-f9bc-4089-a54a-897445c55163" (UID: "151b7d8c-f9bc-4089-a54a-897445c55163"). InnerVolumeSpecName "kube-api-access-qjzh4". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 30 10:35:09 addons-584000 kubelet[2054]: I0930 10:35:09.327158    2054 scope.go:117] "RemoveContainer" containerID="ba9cb3df61e56578881270f32ff5cda871d2f9fb22af68eb12e4762d487e65a6"
	Sep 30 10:35:09 addons-584000 kubelet[2054]: E0930 10:35:09.327535    2054 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: ba9cb3df61e56578881270f32ff5cda871d2f9fb22af68eb12e4762d487e65a6" containerID="ba9cb3df61e56578881270f32ff5cda871d2f9fb22af68eb12e4762d487e65a6"
	Sep 30 10:35:09 addons-584000 kubelet[2054]: I0930 10:35:09.327556    2054 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"ba9cb3df61e56578881270f32ff5cda871d2f9fb22af68eb12e4762d487e65a6"} err="failed to get container status \"ba9cb3df61e56578881270f32ff5cda871d2f9fb22af68eb12e4762d487e65a6\": rpc error: code = Unknown desc = Error response from daemon: No such container: ba9cb3df61e56578881270f32ff5cda871d2f9fb22af68eb12e4762d487e65a6"
	Sep 30 10:35:09 addons-584000 kubelet[2054]: I0930 10:35:09.415014    2054 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-qjzh4\" (UniqueName: \"kubernetes.io/projected/151b7d8c-f9bc-4089-a54a-897445c55163-kube-api-access-qjzh4\") on node \"addons-584000\" DevicePath \"\""
	Sep 30 10:35:09 addons-584000 kubelet[2054]: I0930 10:35:09.415036    2054 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-mlq54\" (UniqueName: \"kubernetes.io/projected/230e32a5-8b5f-413f-b994-093070028d06-kube-api-access-mlq54\") on node \"addons-584000\" DevicePath \"\""
	
	
	==> storage-provisioner [2b295300f71e] <==
	I0930 10:21:45.413012       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0930 10:21:45.551256       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0930 10:21:45.583198       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0930 10:21:45.602183       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0930 10:21:45.602311       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-584000_c835b1f5-9033-4637-a92e-3dc80ebc88fc!
	I0930 10:21:45.602877       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"01666593-ffe5-4916-84f5-8db96cebec5f", APIVersion:"v1", ResourceVersion:"579", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-584000_c835b1f5-9033-4637-a92e-3dc80ebc88fc became leader
	I0930 10:21:45.702403       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-584000_c835b1f5-9033-4637-a92e-3dc80ebc88fc!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p addons-584000 -n addons-584000
helpers_test.go:261: (dbg) Run:  kubectl --context addons-584000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox ingress-nginx-admission-create-bqsgv ingress-nginx-admission-patch-grwfk
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-584000 describe pod busybox ingress-nginx-admission-create-bqsgv ingress-nginx-admission-patch-grwfk
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-584000 describe pod busybox ingress-nginx-admission-create-bqsgv ingress-nginx-admission-patch-grwfk: exit status 1 (40.358458ms)

-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-584000/192.168.105.2
	Start Time:       Mon, 30 Sep 2024 03:25:57 -0700
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.27
	IPs:
	  IP:  10.244.0.27
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-p8rds (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-p8rds:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  9m12s                  default-scheduler  Successfully assigned default/busybox to addons-584000
	  Normal   Pulling    7m47s (x4 over 9m12s)  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     7m46s (x4 over 9m11s)  kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": Error response from daemon: Head "https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc": unauthorized: authentication failed
	  Warning  Failed     7m46s (x4 over 9m11s)  kubelet            Error: ErrImagePull
	  Warning  Failed     7m20s (x6 over 9m10s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m4s (x20 over 9m10s)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-bqsgv" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-grwfk" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-584000 describe pod busybox ingress-nginx-admission-create-bqsgv ingress-nginx-admission-patch-grwfk: exit status 1
--- FAIL: TestAddons/parallel/Registry (71.30s)
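The FAIL above reduces to one symptom: the busybox pod never left ImagePullBackOff, with every pull of gcr.io/k8s-minikube/busybox:1.28.4-glibc rejected as "unauthorized: authentication failed". Given the this_is_fake credentials the gcp-auth addon injected into the pod (see the environment block above), one plausible culprit is those mock credentials being presented to gcr.io as a pull secret. A quick manual check, assuming the addons-584000 profile is still up (illustrative commands, not part of the test run):

	# retry the failing pull inside the node, bypassing the pod's pull secrets
	out/minikube-darwin-arm64 -p addons-584000 ssh -- docker pull gcr.io/k8s-minikube/busybox:1.28.4-glibc
	# inspect whether the pod carries an injected image pull secret
	kubectl --context addons-584000 get pod busybox -o jsonpath='{.spec.imagePullSecrets}'

If the bare in-node pull succeeds while the pod keeps failing, the injected secret is the problem; if it also fails, the issue lies between the node and gcr.io.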

TestCertOptions (10.12s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-options-474000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 
cert_options_test.go:49: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-options-474000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 : exit status 80 (9.850515833s)

-- stdout --
	* [cert-options-474000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19734
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19734-1406/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19734-1406/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-options-474000" primary control-plane node in "cert-options-474000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-options-474000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-options-474000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:51: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-options-474000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 " : exit status 80
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-arm64 -p cert-options-474000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p cert-options-474000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": exit status 83 (81.478916ms)

-- stdout --
	* The control-plane node cert-options-474000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-474000"

-- /stdout --
cert_options_test.go:62: failed to read apiserver cert inside minikube. args "out/minikube-darwin-arm64 -p cert-options-474000 ssh \"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt\"": exit status 83
cert_options_test.go:69: apiserver cert does not include 127.0.0.1 in SAN.
cert_options_test.go:69: apiserver cert does not include 192.168.15.15 in SAN.
cert_options_test.go:69: apiserver cert does not include localhost in SAN.
cert_options_test.go:69: apiserver cert does not include www.google.com in SAN.
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-474000 config view
cert_options_test.go:93: Kubeconfig apiserver server port incorrect. Output of 
'kubectl config view' = "\n-- stdout --\n\tapiVersion: v1\n\tclusters: null\n\tcontexts: null\n\tcurrent-context: \"\"\n\tkind: Config\n\tpreferences: {}\n\tusers: null\n\n-- /stdout --"
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-arm64 ssh -p cert-options-474000 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p cert-options-474000 -- "sudo cat /etc/kubernetes/admin.conf": exit status 83 (42.190125ms)

-- stdout --
	* The control-plane node cert-options-474000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-474000"

-- /stdout --
cert_options_test.go:102: failed to SSH to minikube with args: "out/minikube-darwin-arm64 ssh -p cert-options-474000 -- \"sudo cat /etc/kubernetes/admin.conf\"" : exit status 83
cert_options_test.go:106: Internal minikube kubeconfig (admin.conf) does not contains the right api port. 
-- stdout --
	* The control-plane node cert-options-474000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-474000"

-- /stdout --
cert_options_test.go:109: *** TestCertOptions FAILED at 2024-09-30 04:02:58.552546 -0700 PDT m=+2574.266197043
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-474000 -n cert-options-474000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-474000 -n cert-options-474000: exit status 7 (30.416833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-options-474000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-options-474000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-options-474000
--- FAIL: TestCertOptions (10.12s)
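TestCertOptions never exercised its certificate logic: both VM creation attempts died on the host side with Failed to connect to "/var/run/socket_vmnet": Connection refused, the same error that recurs through most qemu2 failures in this report. That message means the qemu2 driver selected the socket_vmnet network but no socket_vmnet daemon was serving the socket. A host-side sanity check, assuming a typical socket_vmnet installation (paths and service names vary by setup):

	# the unix socket should exist and a daemon should own it
	ls -l /var/run/socket_vmnet
	pgrep -fl socket_vmnet
	# if socket_vmnet was installed via Homebrew, restart its service
	sudo brew services restart socket_vmnet

Until that socket answers, every qemu2-driver test on this host fails the same way, regardless of what the test itself is checking.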

TestCertExpiration (195.36s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-565000 --memory=2048 --cert-expiration=3m --driver=qemu2 
cert_options_test.go:123: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-565000 --memory=2048 --cert-expiration=3m --driver=qemu2 : exit status 80 (10.030342916s)

-- stdout --
	* [cert-expiration-565000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19734
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19734-1406/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19734-1406/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-expiration-565000" primary control-plane node in "cert-expiration-565000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-expiration-565000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-565000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:125: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-expiration-565000 --memory=2048 --cert-expiration=3m --driver=qemu2 " : exit status 80
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-565000 --memory=2048 --cert-expiration=8760h --driver=qemu2 
cert_options_test.go:131: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-565000 --memory=2048 --cert-expiration=8760h --driver=qemu2 : exit status 80 (5.191771708s)

-- stdout --
	* [cert-expiration-565000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19734
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19734-1406/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19734-1406/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-565000" primary control-plane node in "cert-expiration-565000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-565000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-565000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-565000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:133: failed to start minikube after cert expiration: "out/minikube-darwin-arm64 start -p cert-expiration-565000 --memory=2048 --cert-expiration=8760h --driver=qemu2 " : exit status 80
cert_options_test.go:136: minikube start output did not warn about expired certs: 
-- stdout --
	* [cert-expiration-565000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19734
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19734-1406/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19734-1406/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-565000" primary control-plane node in "cert-expiration-565000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-565000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-565000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-565000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:138: *** TestCertExpiration FAILED at 2024-09-30 04:05:58.608334 -0700 PDT m=+2754.324557793
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-565000 -n cert-expiration-565000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-565000 -n cert-expiration-565000: exit status 7 (55.631084ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-expiration-565000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-expiration-565000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-expiration-565000
--- FAIL: TestCertExpiration (195.36s)
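
Both start attempts above fail in the qemu2 driver before any Kubernetes certificate is generated, so the missing expired-certs warning flagged at cert_options_test.go:136 is a downstream symptom of the same socket_vmnet outage, not a certificate bug. The values given to --cert-expiration are ordinary Go durations; a minimal standard-library sketch (nothing minikube-specific is assumed) of what the two values used by this test mean:

	package main

	import (
		"fmt"
		"time"
	)

	func main() {
		// The two durations cert_options_test.go passes to --cert-expiration.
		short, _ := time.ParseDuration("3m")   // certs expire almost immediately
		long, _ := time.ParseDuration("8760h") // 8760h / 24 = 365 days, i.e. one year
		fmt.Println(short, long.Hours()/24)    // prints: 3m0s 365
	}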

TestDockerFlags (10.16s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 start -p docker-flags-602000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:51: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p docker-flags-602000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.922726041s)

-- stdout --
	* [docker-flags-602000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19734
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19734-1406/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19734-1406/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "docker-flags-602000" primary control-plane node in "docker-flags-602000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "docker-flags-602000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0930 04:02:38.412352    4828 out.go:345] Setting OutFile to fd 1 ...
	I0930 04:02:38.412475    4828 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 04:02:38.412478    4828 out.go:358] Setting ErrFile to fd 2...
	I0930 04:02:38.412481    4828 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 04:02:38.412605    4828 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19734-1406/.minikube/bin
	I0930 04:02:38.413697    4828 out.go:352] Setting JSON to false
	I0930 04:02:38.429633    4828 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3721,"bootTime":1727690437,"procs":505,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0930 04:02:38.429700    4828 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0930 04:02:38.433346    4828 out.go:177] * [docker-flags-602000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0930 04:02:38.441362    4828 out.go:177]   - MINIKUBE_LOCATION=19734
	I0930 04:02:38.441414    4828 notify.go:220] Checking for updates...
	I0930 04:02:38.448302    4828 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19734-1406/kubeconfig
	I0930 04:02:38.451381    4828 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0930 04:02:38.454289    4828 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0930 04:02:38.457324    4828 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19734-1406/.minikube
	I0930 04:02:38.460360    4828 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0930 04:02:38.463670    4828 config.go:182] Loaded profile config "force-systemd-flag-910000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0930 04:02:38.463735    4828 config.go:182] Loaded profile config "multinode-711000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0930 04:02:38.463783    4828 driver.go:394] Setting default libvirt URI to qemu:///system
	I0930 04:02:38.468338    4828 out.go:177] * Using the qemu2 driver based on user configuration
	I0930 04:02:38.475322    4828 start.go:297] selected driver: qemu2
	I0930 04:02:38.475327    4828 start.go:901] validating driver "qemu2" against <nil>
	I0930 04:02:38.475334    4828 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0930 04:02:38.477470    4828 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0930 04:02:38.480315    4828 out.go:177] * Automatically selected the socket_vmnet network
	I0930 04:02:38.483424    4828 start_flags.go:942] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I0930 04:02:38.483448    4828 cni.go:84] Creating CNI manager for ""
	I0930 04:02:38.483481    4828 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0930 04:02:38.483485    4828 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0930 04:02:38.483508    4828 start.go:340] cluster config:
	{Name:docker-flags-602000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:docker-flags-602000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 04:02:38.487092    4828 iso.go:125] acquiring lock: {Name:mk0aae8f7f133a44b195b0eefe4c7ae573d14aac Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0930 04:02:38.490340    4828 out.go:177] * Starting "docker-flags-602000" primary control-plane node in "docker-flags-602000" cluster
	I0930 04:02:38.498356    4828 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0930 04:02:38.498375    4828 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0930 04:02:38.498386    4828 cache.go:56] Caching tarball of preloaded images
	I0930 04:02:38.498448    4828 preload.go:172] Found /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0930 04:02:38.498454    4828 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0930 04:02:38.498534    4828 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19734-1406/.minikube/profiles/docker-flags-602000/config.json ...
	I0930 04:02:38.498545    4828 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19734-1406/.minikube/profiles/docker-flags-602000/config.json: {Name:mka2b7a39086316a9a9a686b2bc28b38cf36efda Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 04:02:38.498780    4828 start.go:360] acquireMachinesLock for docker-flags-602000: {Name:mk6ecfc7d8126da9fa2597e515ffeee9de676ee7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0930 04:02:38.498819    4828 start.go:364] duration metric: took 29.792µs to acquireMachinesLock for "docker-flags-602000"
	I0930 04:02:38.498832    4828 start.go:93] Provisioning new machine with config: &{Name:docker-flags-602000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:docker-flags-602000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0930 04:02:38.498865    4828 start.go:125] createHost starting for "" (driver="qemu2")
	I0930 04:02:38.503369    4828 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0930 04:02:38.521705    4828 start.go:159] libmachine.API.Create for "docker-flags-602000" (driver="qemu2")
	I0930 04:02:38.521732    4828 client.go:168] LocalClient.Create starting
	I0930 04:02:38.521793    4828 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19734-1406/.minikube/certs/ca.pem
	I0930 04:02:38.521829    4828 main.go:141] libmachine: Decoding PEM data...
	I0930 04:02:38.521838    4828 main.go:141] libmachine: Parsing certificate...
	I0930 04:02:38.521881    4828 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19734-1406/.minikube/certs/cert.pem
	I0930 04:02:38.521908    4828 main.go:141] libmachine: Decoding PEM data...
	I0930 04:02:38.521915    4828 main.go:141] libmachine: Parsing certificate...
	I0930 04:02:38.522274    4828 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19734-1406/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0930 04:02:38.682861    4828 main.go:141] libmachine: Creating SSH key...
	I0930 04:02:38.737670    4828 main.go:141] libmachine: Creating Disk image...
	I0930 04:02:38.737679    4828 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0930 04:02:38.737851    4828 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/docker-flags-602000/disk.qcow2.raw /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/docker-flags-602000/disk.qcow2
	I0930 04:02:38.746967    4828 main.go:141] libmachine: STDOUT: 
	I0930 04:02:38.746988    4828 main.go:141] libmachine: STDERR: 
	I0930 04:02:38.747047    4828 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/docker-flags-602000/disk.qcow2 +20000M
	I0930 04:02:38.755052    4828 main.go:141] libmachine: STDOUT: Image resized.
	
	I0930 04:02:38.755180    4828 main.go:141] libmachine: STDERR: 
	I0930 04:02:38.755194    4828 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/docker-flags-602000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/docker-flags-602000/disk.qcow2
	I0930 04:02:38.755199    4828 main.go:141] libmachine: Starting QEMU VM...
	I0930 04:02:38.755211    4828 qemu.go:418] Using hvf for hardware acceleration
	I0930 04:02:38.755243    4828 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/docker-flags-602000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19734-1406/.minikube/machines/docker-flags-602000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/docker-flags-602000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e6:b0:5d:54:ff:42 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/docker-flags-602000/disk.qcow2
	I0930 04:02:38.756922    4828 main.go:141] libmachine: STDOUT: 
	I0930 04:02:38.756975    4828 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0930 04:02:38.756996    4828 client.go:171] duration metric: took 235.261959ms to LocalClient.Create
	I0930 04:02:40.759082    4828 start.go:128] duration metric: took 2.260236834s to createHost
	I0930 04:02:40.759110    4828 start.go:83] releasing machines lock for "docker-flags-602000", held for 2.26031675s
	W0930 04:02:40.759145    4828 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0930 04:02:40.773784    4828 out.go:177] * Deleting "docker-flags-602000" in qemu2 ...
	W0930 04:02:40.784637    4828 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0930 04:02:40.784647    4828 start.go:729] Will try again in 5 seconds ...
	I0930 04:02:45.786776    4828 start.go:360] acquireMachinesLock for docker-flags-602000: {Name:mk6ecfc7d8126da9fa2597e515ffeee9de676ee7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0930 04:02:45.835094    4828 start.go:364] duration metric: took 48.203792ms to acquireMachinesLock for "docker-flags-602000"
	I0930 04:02:45.835265    4828 start.go:93] Provisioning new machine with config: &{Name:docker-flags-602000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:docker-flags-602000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0930 04:02:45.835509    4828 start.go:125] createHost starting for "" (driver="qemu2")
	I0930 04:02:45.852258    4828 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0930 04:02:45.901738    4828 start.go:159] libmachine.API.Create for "docker-flags-602000" (driver="qemu2")
	I0930 04:02:45.901781    4828 client.go:168] LocalClient.Create starting
	I0930 04:02:45.901918    4828 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19734-1406/.minikube/certs/ca.pem
	I0930 04:02:45.901983    4828 main.go:141] libmachine: Decoding PEM data...
	I0930 04:02:45.902009    4828 main.go:141] libmachine: Parsing certificate...
	I0930 04:02:45.902068    4828 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19734-1406/.minikube/certs/cert.pem
	I0930 04:02:45.902114    4828 main.go:141] libmachine: Decoding PEM data...
	I0930 04:02:45.902133    4828 main.go:141] libmachine: Parsing certificate...
	I0930 04:02:45.902746    4828 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19734-1406/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0930 04:02:46.084408    4828 main.go:141] libmachine: Creating SSH key...
	I0930 04:02:46.222637    4828 main.go:141] libmachine: Creating Disk image...
	I0930 04:02:46.222643    4828 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0930 04:02:46.222869    4828 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/docker-flags-602000/disk.qcow2.raw /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/docker-flags-602000/disk.qcow2
	I0930 04:02:46.232653    4828 main.go:141] libmachine: STDOUT: 
	I0930 04:02:46.232674    4828 main.go:141] libmachine: STDERR: 
	I0930 04:02:46.232725    4828 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/docker-flags-602000/disk.qcow2 +20000M
	I0930 04:02:46.240481    4828 main.go:141] libmachine: STDOUT: Image resized.
	
	I0930 04:02:46.240498    4828 main.go:141] libmachine: STDERR: 
	I0930 04:02:46.240512    4828 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/docker-flags-602000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/docker-flags-602000/disk.qcow2
	I0930 04:02:46.240517    4828 main.go:141] libmachine: Starting QEMU VM...
	I0930 04:02:46.240533    4828 qemu.go:418] Using hvf for hardware acceleration
	I0930 04:02:46.240567    4828 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/docker-flags-602000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19734-1406/.minikube/machines/docker-flags-602000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/docker-flags-602000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f2:41:21:e8:82:65 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/docker-flags-602000/disk.qcow2
	I0930 04:02:46.242211    4828 main.go:141] libmachine: STDOUT: 
	I0930 04:02:46.242229    4828 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0930 04:02:46.242240    4828 client.go:171] duration metric: took 340.457834ms to LocalClient.Create
	I0930 04:02:48.244382    4828 start.go:128] duration metric: took 2.408878208s to createHost
	I0930 04:02:48.244443    4828 start.go:83] releasing machines lock for "docker-flags-602000", held for 2.409357833s
	W0930 04:02:48.244765    4828 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p docker-flags-602000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p docker-flags-602000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0930 04:02:48.259620    4828 out.go:201] 
	W0930 04:02:48.277684    4828 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0930 04:02:48.277717    4828 out.go:270] * 
	* 
	W0930 04:02:48.279496    4828 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0930 04:02:48.291503    4828 out.go:201] 

** /stderr **
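
The stderr above shows the exact failure point: libmachine launches qemu-system-aarch64 through /opt/socket_vmnet/bin/socket_vmnet_client, which must first connect to the unix socket at /var/run/socket_vmnet, and that dial is refused, so QEMU never runs. The condition can be checked independently of minikube; a small probe sketch (illustrative code, not minikube's own, with the socket path taken from the cluster config logged above):

	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		// SocketVMnetPath from the cluster config in the log above.
		const sock = "/var/run/socket_vmnet"
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			// "Connection refused" here matches the driver failure in this
			// report: no socket_vmnet daemon is listening on the path.
			fmt.Fprintf(os.Stderr, "dial %s: %v\n", sock, err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Printf("%s is accepting connections\n", sock)
	}

On a healthy host the probe prints the success line; on this agent it would exit 1 with the same refusal seen in every failed start, which points at the socket_vmnet service on the Jenkins host rather than at the tests themselves.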
docker_test.go:53: failed to start minikube with args: "out/minikube-darwin-arm64 start -p docker-flags-602000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:56: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-602000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-602000 ssh "sudo systemctl show docker --property=Environment --no-pager": exit status 83 (80.841166ms)

-- stdout --
	* The control-plane node docker-flags-602000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-602000"

-- /stdout --
docker_test.go:58: failed to 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-602000 ssh \"sudo systemctl show docker --property=Environment --no-pager\"": exit status 83
docker_test.go:63: expected env key/value "FOO=BAR" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-602000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-602000\"\n"*.
docker_test.go:63: expected env key/value "BAZ=BAT" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-602000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-602000\"\n"*.
docker_test.go:67: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-602000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-602000 ssh "sudo systemctl show docker --property=ExecStart --no-pager": exit status 83 (44.673041ms)

-- stdout --
	* The control-plane node docker-flags-602000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-602000"

-- /stdout --
docker_test.go:69: failed on the second 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-602000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"": exit status 83
docker_test.go:73: expected "out/minikube-darwin-arm64 -p docker-flags-602000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"" output to have include *--debug* . output: "* The control-plane node docker-flags-602000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-602000\"\n"
panic.go:629: *** TestDockerFlags FAILED at 2024-09-30 04:02:48.434411 -0700 PDT m=+2564.147917835
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-602000 -n docker-flags-602000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-602000 -n docker-flags-602000: exit status 7 (29.557833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "docker-flags-602000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "docker-flags-602000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p docker-flags-602000
--- FAIL: TestDockerFlags (10.16s)
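
Had the VM booted, the two ssh probes above would have verified that the --docker-env pairs surface in the docker unit's Environment property and that the --docker-opt values (debug, icc=true) surface in its ExecStart. A simplified sketch of the first assertion, with the profile name and binary path copied from this run (an illustration of the check, not the literal docker_test.go code):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("out/minikube-darwin-arm64", "-p", "docker-flags-602000",
			"ssh", "sudo systemctl show docker --property=Environment --no-pager").CombinedOutput()
		if err != nil {
			fmt.Println("ssh failed:", err) // in this run: exit status 83, host not running
			return
		}
		for _, kv := range []string{"FOO=BAR", "BAZ=BAT"} {
			if !strings.Contains(string(out), kv) {
				fmt.Printf("docker-env %q was not propagated to dockerd\n", kv)
			}
		}
	}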

TestForceSystemdFlag (10.06s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-flag-910000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:91: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-flag-910000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.866034541s)

-- stdout --
	* [force-systemd-flag-910000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19734
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19734-1406/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19734-1406/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-flag-910000" primary control-plane node in "force-systemd-flag-910000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-flag-910000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0930 04:02:33.357336    4807 out.go:345] Setting OutFile to fd 1 ...
	I0930 04:02:33.357488    4807 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 04:02:33.357492    4807 out.go:358] Setting ErrFile to fd 2...
	I0930 04:02:33.357495    4807 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 04:02:33.357636    4807 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19734-1406/.minikube/bin
	I0930 04:02:33.358643    4807 out.go:352] Setting JSON to false
	I0930 04:02:33.374721    4807 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3716,"bootTime":1727690437,"procs":508,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0930 04:02:33.374787    4807 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0930 04:02:33.381596    4807 out.go:177] * [force-systemd-flag-910000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0930 04:02:33.405590    4807 out.go:177]   - MINIKUBE_LOCATION=19734
	I0930 04:02:33.405619    4807 notify.go:220] Checking for updates...
	I0930 04:02:33.417596    4807 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19734-1406/kubeconfig
	I0930 04:02:33.421554    4807 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0930 04:02:33.424581    4807 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0930 04:02:33.427711    4807 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19734-1406/.minikube
	I0930 04:02:33.430466    4807 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0930 04:02:33.433882    4807 config.go:182] Loaded profile config "force-systemd-env-516000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0930 04:02:33.433964    4807 config.go:182] Loaded profile config "multinode-711000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0930 04:02:33.434021    4807 driver.go:394] Setting default libvirt URI to qemu:///system
	I0930 04:02:33.438594    4807 out.go:177] * Using the qemu2 driver based on user configuration
	I0930 04:02:33.445567    4807 start.go:297] selected driver: qemu2
	I0930 04:02:33.445575    4807 start.go:901] validating driver "qemu2" against <nil>
	I0930 04:02:33.445583    4807 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0930 04:02:33.448308    4807 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0930 04:02:33.451540    4807 out.go:177] * Automatically selected the socket_vmnet network
	I0930 04:02:33.453091    4807 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0930 04:02:33.453106    4807 cni.go:84] Creating CNI manager for ""
	I0930 04:02:33.453140    4807 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0930 04:02:33.453147    4807 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0930 04:02:33.453192    4807 start.go:340] cluster config:
	{Name:force-systemd-flag-910000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-flag-910000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 04:02:33.457482    4807 iso.go:125] acquiring lock: {Name:mk0aae8f7f133a44b195b0eefe4c7ae573d14aac Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0930 04:02:33.464594    4807 out.go:177] * Starting "force-systemd-flag-910000" primary control-plane node in "force-systemd-flag-910000" cluster
	I0930 04:02:33.468546    4807 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0930 04:02:33.468566    4807 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0930 04:02:33.468575    4807 cache.go:56] Caching tarball of preloaded images
	I0930 04:02:33.468643    4807 preload.go:172] Found /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0930 04:02:33.468650    4807 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0930 04:02:33.468720    4807 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19734-1406/.minikube/profiles/force-systemd-flag-910000/config.json ...
	I0930 04:02:33.468732    4807 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19734-1406/.minikube/profiles/force-systemd-flag-910000/config.json: {Name:mkb65eb0c6cdbc0528c1b0135b6495dbb5eff0cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 04:02:33.468985    4807 start.go:360] acquireMachinesLock for force-systemd-flag-910000: {Name:mk6ecfc7d8126da9fa2597e515ffeee9de676ee7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0930 04:02:33.469028    4807 start.go:364] duration metric: took 33.459µs to acquireMachinesLock for "force-systemd-flag-910000"
	I0930 04:02:33.469043    4807 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-910000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-flag-910000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0930 04:02:33.469074    4807 start.go:125] createHost starting for "" (driver="qemu2")
	I0930 04:02:33.477587    4807 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0930 04:02:33.498282    4807 start.go:159] libmachine.API.Create for "force-systemd-flag-910000" (driver="qemu2")
	I0930 04:02:33.498312    4807 client.go:168] LocalClient.Create starting
	I0930 04:02:33.498384    4807 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19734-1406/.minikube/certs/ca.pem
	I0930 04:02:33.498418    4807 main.go:141] libmachine: Decoding PEM data...
	I0930 04:02:33.498432    4807 main.go:141] libmachine: Parsing certificate...
	I0930 04:02:33.498482    4807 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19734-1406/.minikube/certs/cert.pem
	I0930 04:02:33.498515    4807 main.go:141] libmachine: Decoding PEM data...
	I0930 04:02:33.498522    4807 main.go:141] libmachine: Parsing certificate...
	I0930 04:02:33.498962    4807 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19734-1406/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0930 04:02:33.660091    4807 main.go:141] libmachine: Creating SSH key...
	I0930 04:02:33.760926    4807 main.go:141] libmachine: Creating Disk image...
	I0930 04:02:33.760933    4807 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0930 04:02:33.761121    4807 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/force-systemd-flag-910000/disk.qcow2.raw /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/force-systemd-flag-910000/disk.qcow2
	I0930 04:02:33.770232    4807 main.go:141] libmachine: STDOUT: 
	I0930 04:02:33.770257    4807 main.go:141] libmachine: STDERR: 
	I0930 04:02:33.770332    4807 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/force-systemd-flag-910000/disk.qcow2 +20000M
	I0930 04:02:33.778133    4807 main.go:141] libmachine: STDOUT: Image resized.
	
	I0930 04:02:33.778151    4807 main.go:141] libmachine: STDERR: 
	I0930 04:02:33.778163    4807 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/force-systemd-flag-910000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/force-systemd-flag-910000/disk.qcow2
	I0930 04:02:33.778168    4807 main.go:141] libmachine: Starting QEMU VM...
	I0930 04:02:33.778180    4807 qemu.go:418] Using hvf for hardware acceleration
	I0930 04:02:33.778212    4807 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/force-systemd-flag-910000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19734-1406/.minikube/machines/force-systemd-flag-910000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/force-systemd-flag-910000/qemu.pid -device virtio-net-pci,netdev=net0,mac=76:08:ef:84:75:82 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/force-systemd-flag-910000/disk.qcow2
	I0930 04:02:33.779844    4807 main.go:141] libmachine: STDOUT: 
	I0930 04:02:33.779861    4807 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0930 04:02:33.779886    4807 client.go:171] duration metric: took 281.573458ms to LocalClient.Create
	I0930 04:02:35.782078    4807 start.go:128] duration metric: took 2.312990083s to createHost
	I0930 04:02:35.782120    4807 start.go:83] releasing machines lock for "force-systemd-flag-910000", held for 2.313114667s
	W0930 04:02:35.782196    4807 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0930 04:02:35.811393    4807 out.go:177] * Deleting "force-systemd-flag-910000" in qemu2 ...
	W0930 04:02:35.837519    4807 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0930 04:02:35.837547    4807 start.go:729] Will try again in 5 seconds ...
	I0930 04:02:40.839592    4807 start.go:360] acquireMachinesLock for force-systemd-flag-910000: {Name:mk6ecfc7d8126da9fa2597e515ffeee9de676ee7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0930 04:02:40.839710    4807 start.go:364] duration metric: took 85.125µs to acquireMachinesLock for "force-systemd-flag-910000"
	I0930 04:02:40.839739    4807 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-910000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-flag-910000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0930 04:02:40.839775    4807 start.go:125] createHost starting for "" (driver="qemu2")
	I0930 04:02:40.848787    4807 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0930 04:02:40.864476    4807 start.go:159] libmachine.API.Create for "force-systemd-flag-910000" (driver="qemu2")
	I0930 04:02:40.864507    4807 client.go:168] LocalClient.Create starting
	I0930 04:02:40.864570    4807 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19734-1406/.minikube/certs/ca.pem
	I0930 04:02:40.864605    4807 main.go:141] libmachine: Decoding PEM data...
	I0930 04:02:40.864614    4807 main.go:141] libmachine: Parsing certificate...
	I0930 04:02:40.864644    4807 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19734-1406/.minikube/certs/cert.pem
	I0930 04:02:40.864667    4807 main.go:141] libmachine: Decoding PEM data...
	I0930 04:02:40.864674    4807 main.go:141] libmachine: Parsing certificate...
	I0930 04:02:40.870298    4807 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19734-1406/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0930 04:02:41.037768    4807 main.go:141] libmachine: Creating SSH key...
	I0930 04:02:41.123299    4807 main.go:141] libmachine: Creating Disk image...
	I0930 04:02:41.123307    4807 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0930 04:02:41.123496    4807 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/force-systemd-flag-910000/disk.qcow2.raw /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/force-systemd-flag-910000/disk.qcow2
	I0930 04:02:41.133147    4807 main.go:141] libmachine: STDOUT: 
	I0930 04:02:41.133163    4807 main.go:141] libmachine: STDERR: 
	I0930 04:02:41.133220    4807 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/force-systemd-flag-910000/disk.qcow2 +20000M
	I0930 04:02:41.141032    4807 main.go:141] libmachine: STDOUT: Image resized.
	
	I0930 04:02:41.141056    4807 main.go:141] libmachine: STDERR: 
	I0930 04:02:41.141069    4807 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/force-systemd-flag-910000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/force-systemd-flag-910000/disk.qcow2
	I0930 04:02:41.141074    4807 main.go:141] libmachine: Starting QEMU VM...
	I0930 04:02:41.141080    4807 qemu.go:418] Using hvf for hardware acceleration
	I0930 04:02:41.141118    4807 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/force-systemd-flag-910000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19734-1406/.minikube/machines/force-systemd-flag-910000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/force-systemd-flag-910000/qemu.pid -device virtio-net-pci,netdev=net0,mac=76:dc:0a:23:81:35 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/force-systemd-flag-910000/disk.qcow2
	I0930 04:02:41.142813    4807 main.go:141] libmachine: STDOUT: 
	I0930 04:02:41.142826    4807 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0930 04:02:41.142839    4807 client.go:171] duration metric: took 278.332375ms to LocalClient.Create
	I0930 04:02:43.144985    4807 start.go:128] duration metric: took 2.305216s to createHost
	I0930 04:02:43.145037    4807 start.go:83] releasing machines lock for "force-systemd-flag-910000", held for 2.305349334s
	W0930 04:02:43.145441    4807 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-910000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0930 04:02:43.162164    4807 out.go:201] 
	W0930 04:02:43.165152    4807 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0930 04:02:43.165175    4807 out.go:270] * 
	W0930 04:02:43.167821    4807 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0930 04:02:43.182028    4807 out.go:201] 

** /stderr **
docker_test.go:93: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-flag-910000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-flag-910000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-flag-910000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (80.484084ms)

-- stdout --
	* The control-plane node force-systemd-flag-910000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-flag-910000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-flag-910000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2024-09-30 04:02:43.27862 -0700 PDT m=+2558.992053001
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-910000 -n force-systemd-flag-910000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-910000 -n force-systemd-flag-910000: exit status 7 (32.645833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-flag-910000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-flag-910000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-flag-910000
--- FAIL: TestForceSystemdFlag (10.06s)
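
Note: every qemu2 start in this run fails at the same step: socket_vmnet_client cannot reach the /var/run/socket_vmnet unix socket ("Connection refused"), so the VM process never launches and every follow-up ssh/status command sees a stopped host. A quick way to confirm the daemon is down is to probe the socket directly; the Go sketch below is illustrative only (the socket path is taken from SocketVMnetPath in the config dumps in this report, not from minikube's code):

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Probe the unix socket that socket_vmnet_client needs; a "connection
		// refused" here reproduces the failure captured in the log above.
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			fmt.Println("socket_vmnet unreachable:", err)
			return
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}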

TestForceSystemdEnv (11.47s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-env-516000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 
I0930 04:02:28.399873    1929 install.go:79] stdout: 
W0930 04:02:28.400060    1929 out.go:174] [unset outFile]: * The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

$ sudo chown root:wheel /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperKitDriverInstallOrUpdate2824951066/001/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperKitDriverInstallOrUpdate2824951066/001/docker-machine-driver-hyperkit 

I0930 04:02:28.400089    1929 install.go:99] testing: [sudo -n chown root:wheel /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperKitDriverInstallOrUpdate2824951066/001/docker-machine-driver-hyperkit]
I0930 04:02:28.414697    1929 install.go:106] running: [sudo chown root:wheel /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperKitDriverInstallOrUpdate2824951066/001/docker-machine-driver-hyperkit]
I0930 04:02:28.424872    1929 install.go:99] testing: [sudo -n chmod u+s /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperKitDriverInstallOrUpdate2824951066/001/docker-machine-driver-hyperkit]
I0930 04:02:28.434004    1929 install.go:106] running: [sudo chmod u+s /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperKitDriverInstallOrUpdate2824951066/001/docker-machine-driver-hyperkit]
I0930 04:02:28.450930    1929 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0930 04:02:28.451067    1929 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/workspace/testdata/hyperkit-driver-older-version:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin:/opt/homebrew/bin
I0930 04:02:30.235813    1929 install.go:137] /Users/jenkins/workspace/testdata/hyperkit-driver-older-version/docker-machine-driver-hyperkit version is 1.2.0
W0930 04:02:30.235833    1929 install.go:62] docker-machine-driver-hyperkit: docker-machine-driver-hyperkit is version 1.2.0, want 1.11.0
W0930 04:02:30.235880    1929 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-hyperkit:
I0930 04:02:30.235911    1929 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64.sha256 -> /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperKitDriverInstallOrUpdate2824951066/002/docker-machine-driver-hyperkit
I0930 04:02:30.627228    1929 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64.sha256 Dst:/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperKitDriverInstallOrUpdate2824951066/002/docker-machine-driver-hyperkit.download Pwd: Mode:2 Umask:---------- Detectors:[0x1064f2d40 0x1064f2d40 0x1064f2d40 0x1064f2d40 0x1064f2d40 0x1064f2d40 0x1064f2d40] Decompressors:map[bz2:0x14000121600 gz:0x14000121608 tar:0x140001215a0 tar.bz2:0x140001215b0 tar.gz:0x140001215c0 tar.xz:0x140001215d0 tar.zst:0x140001215e0 tbz2:0x140001215b0 tgz:0x140001215c0 txz:0x140001215d0 tzst:0x140001215e0 xz:0x14000121610 zip:0x14000121620 zst:0x14000121618] Getters:map[file:0x14001732300 http:0x140006e16d0 https:0x140006e1720] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0930 04:02:30.627393    1929 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit.sha256 -> /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperKitDriverInstallOrUpdate2824951066/002/docker-machine-driver-hyperkit
I0930 04:02:33.285128    1929 install.go:79] stdout: 
W0930 04:02:33.285276    1929 out.go:174] [unset outFile]: * The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

$ sudo chown root:wheel /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperKitDriverInstallOrUpdate2824951066/002/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperKitDriverInstallOrUpdate2824951066/002/docker-machine-driver-hyperkit 

I0930 04:02:33.285304    1929 install.go:99] testing: [sudo -n chown root:wheel /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperKitDriverInstallOrUpdate2824951066/002/docker-machine-driver-hyperkit]
I0930 04:02:33.299265    1929 install.go:106] running: [sudo chown root:wheel /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperKitDriverInstallOrUpdate2824951066/002/docker-machine-driver-hyperkit]
I0930 04:02:33.310321    1929 install.go:99] testing: [sudo -n chmod u+s /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperKitDriverInstallOrUpdate2824951066/002/docker-machine-driver-hyperkit]
I0930 04:02:33.319290    1929 install.go:106] running: [sudo chmod u+s /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperKitDriverInstallOrUpdate2824951066/002/docker-machine-driver-hyperkit]
docker_test.go:155: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-env-516000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (11.251603167s)

-- stdout --
	* [force-systemd-env-516000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19734
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19734-1406/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19734-1406/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-env-516000" primary control-plane node in "force-systemd-env-516000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-env-516000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0930 04:02:26.945561    4775 out.go:345] Setting OutFile to fd 1 ...
	I0930 04:02:26.945716    4775 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 04:02:26.945719    4775 out.go:358] Setting ErrFile to fd 2...
	I0930 04:02:26.945721    4775 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 04:02:26.945846    4775 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19734-1406/.minikube/bin
	I0930 04:02:26.946949    4775 out.go:352] Setting JSON to false
	I0930 04:02:26.963008    4775 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3709,"bootTime":1727690437,"procs":507,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0930 04:02:26.963077    4775 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0930 04:02:26.970709    4775 out.go:177] * [force-systemd-env-516000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0930 04:02:26.978639    4775 out.go:177]   - MINIKUBE_LOCATION=19734
	I0930 04:02:26.978724    4775 notify.go:220] Checking for updates...
	I0930 04:02:26.986551    4775 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19734-1406/kubeconfig
	I0930 04:02:26.989599    4775 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0930 04:02:26.992512    4775 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0930 04:02:26.995546    4775 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19734-1406/.minikube
	I0930 04:02:26.998626    4775 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I0930 04:02:27.002016    4775 config.go:182] Loaded profile config "multinode-711000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0930 04:02:27.002066    4775 driver.go:394] Setting default libvirt URI to qemu:///system
	I0930 04:02:27.006577    4775 out.go:177] * Using the qemu2 driver based on user configuration
	I0930 04:02:27.013477    4775 start.go:297] selected driver: qemu2
	I0930 04:02:27.013483    4775 start.go:901] validating driver "qemu2" against <nil>
	I0930 04:02:27.013490    4775 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0930 04:02:27.015721    4775 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0930 04:02:27.018571    4775 out.go:177] * Automatically selected the socket_vmnet network
	I0930 04:02:27.021702    4775 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0930 04:02:27.021728    4775 cni.go:84] Creating CNI manager for ""
	I0930 04:02:27.021765    4775 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0930 04:02:27.021769    4775 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0930 04:02:27.021804    4775 start.go:340] cluster config:
	{Name:force-systemd-env-516000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-env-516000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 04:02:27.025572    4775 iso.go:125] acquiring lock: {Name:mk0aae8f7f133a44b195b0eefe4c7ae573d14aac Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0930 04:02:27.033553    4775 out.go:177] * Starting "force-systemd-env-516000" primary control-plane node in "force-systemd-env-516000" cluster
	I0930 04:02:27.037603    4775 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0930 04:02:27.037622    4775 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0930 04:02:27.037632    4775 cache.go:56] Caching tarball of preloaded images
	I0930 04:02:27.037732    4775 preload.go:172] Found /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0930 04:02:27.037738    4775 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0930 04:02:27.037798    4775 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19734-1406/.minikube/profiles/force-systemd-env-516000/config.json ...
	I0930 04:02:27.037809    4775 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19734-1406/.minikube/profiles/force-systemd-env-516000/config.json: {Name:mk126083d660c5dca77cbf98ddc83362894a8ded Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 04:02:27.038050    4775 start.go:360] acquireMachinesLock for force-systemd-env-516000: {Name:mk6ecfc7d8126da9fa2597e515ffeee9de676ee7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0930 04:02:27.038089    4775 start.go:364] duration metric: took 30.875µs to acquireMachinesLock for "force-systemd-env-516000"
	I0930 04:02:27.038103    4775 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-516000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-env-516000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0930 04:02:27.038138    4775 start.go:125] createHost starting for "" (driver="qemu2")
	I0930 04:02:27.041653    4775 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0930 04:02:27.060983    4775 start.go:159] libmachine.API.Create for "force-systemd-env-516000" (driver="qemu2")
	I0930 04:02:27.061014    4775 client.go:168] LocalClient.Create starting
	I0930 04:02:27.061086    4775 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19734-1406/.minikube/certs/ca.pem
	I0930 04:02:27.061117    4775 main.go:141] libmachine: Decoding PEM data...
	I0930 04:02:27.061127    4775 main.go:141] libmachine: Parsing certificate...
	I0930 04:02:27.061180    4775 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19734-1406/.minikube/certs/cert.pem
	I0930 04:02:27.061210    4775 main.go:141] libmachine: Decoding PEM data...
	I0930 04:02:27.061220    4775 main.go:141] libmachine: Parsing certificate...
	I0930 04:02:27.061567    4775 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19734-1406/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0930 04:02:27.224173    4775 main.go:141] libmachine: Creating SSH key...
	I0930 04:02:27.458739    4775 main.go:141] libmachine: Creating Disk image...
	I0930 04:02:27.458750    4775 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0930 04:02:27.458985    4775 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/force-systemd-env-516000/disk.qcow2.raw /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/force-systemd-env-516000/disk.qcow2
	I0930 04:02:27.468829    4775 main.go:141] libmachine: STDOUT: 
	I0930 04:02:27.468847    4775 main.go:141] libmachine: STDERR: 
	I0930 04:02:27.468911    4775 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/force-systemd-env-516000/disk.qcow2 +20000M
	I0930 04:02:27.477112    4775 main.go:141] libmachine: STDOUT: Image resized.
	
	I0930 04:02:27.477126    4775 main.go:141] libmachine: STDERR: 
	I0930 04:02:27.477141    4775 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/force-systemd-env-516000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/force-systemd-env-516000/disk.qcow2
	I0930 04:02:27.477147    4775 main.go:141] libmachine: Starting QEMU VM...
	I0930 04:02:27.477158    4775 qemu.go:418] Using hvf for hardware acceleration
	I0930 04:02:27.477187    4775 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/force-systemd-env-516000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19734-1406/.minikube/machines/force-systemd-env-516000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/force-systemd-env-516000/qemu.pid -device virtio-net-pci,netdev=net0,mac=72:9b:63:89:bc:81 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/force-systemd-env-516000/disk.qcow2
	I0930 04:02:27.478901    4775 main.go:141] libmachine: STDOUT: 
	I0930 04:02:27.478915    4775 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0930 04:02:27.478934    4775 client.go:171] duration metric: took 417.91875ms to LocalClient.Create
	I0930 04:02:29.479643    4775 start.go:128] duration metric: took 2.441523709s to createHost
	I0930 04:02:29.479669    4775 start.go:83] releasing machines lock for "force-systemd-env-516000", held for 2.441609667s
	W0930 04:02:29.479686    4775 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0930 04:02:29.498223    4775 out.go:177] * Deleting "force-systemd-env-516000" in qemu2 ...
	W0930 04:02:29.512437    4775 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0930 04:02:29.512444    4775 start.go:729] Will try again in 5 seconds ...
	I0930 04:02:34.514696    4775 start.go:360] acquireMachinesLock for force-systemd-env-516000: {Name:mk6ecfc7d8126da9fa2597e515ffeee9de676ee7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0930 04:02:35.782269    4775 start.go:364] duration metric: took 1.267480792s to acquireMachinesLock for "force-systemd-env-516000"
	I0930 04:02:35.782397    4775 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-516000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-env-516000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0930 04:02:35.782690    4775 start.go:125] createHost starting for "" (driver="qemu2")
	I0930 04:02:35.798300    4775 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0930 04:02:35.846411    4775 start.go:159] libmachine.API.Create for "force-systemd-env-516000" (driver="qemu2")
	I0930 04:02:35.846479    4775 client.go:168] LocalClient.Create starting
	I0930 04:02:35.846602    4775 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19734-1406/.minikube/certs/ca.pem
	I0930 04:02:35.846664    4775 main.go:141] libmachine: Decoding PEM data...
	I0930 04:02:35.846682    4775 main.go:141] libmachine: Parsing certificate...
	I0930 04:02:35.846748    4775 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19734-1406/.minikube/certs/cert.pem
	I0930 04:02:35.846796    4775 main.go:141] libmachine: Decoding PEM data...
	I0930 04:02:35.846807    4775 main.go:141] libmachine: Parsing certificate...
	I0930 04:02:35.849333    4775 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19734-1406/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0930 04:02:36.035423    4775 main.go:141] libmachine: Creating SSH key...
	I0930 04:02:36.078268    4775 main.go:141] libmachine: Creating Disk image...
	I0930 04:02:36.078273    4775 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0930 04:02:36.078474    4775 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/force-systemd-env-516000/disk.qcow2.raw /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/force-systemd-env-516000/disk.qcow2
	I0930 04:02:36.087786    4775 main.go:141] libmachine: STDOUT: 
	I0930 04:02:36.087806    4775 main.go:141] libmachine: STDERR: 
	I0930 04:02:36.087864    4775 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/force-systemd-env-516000/disk.qcow2 +20000M
	I0930 04:02:36.095699    4775 main.go:141] libmachine: STDOUT: Image resized.
	
	I0930 04:02:36.095716    4775 main.go:141] libmachine: STDERR: 
	I0930 04:02:36.095739    4775 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/force-systemd-env-516000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/force-systemd-env-516000/disk.qcow2
	I0930 04:02:36.095743    4775 main.go:141] libmachine: Starting QEMU VM...
	I0930 04:02:36.095751    4775 qemu.go:418] Using hvf for hardware acceleration
	I0930 04:02:36.095780    4775 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/force-systemd-env-516000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19734-1406/.minikube/machines/force-systemd-env-516000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/force-systemd-env-516000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ce:e5:27:e3:a2:ad -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/force-systemd-env-516000/disk.qcow2
	I0930 04:02:36.097529    4775 main.go:141] libmachine: STDOUT: 
	I0930 04:02:36.097543    4775 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0930 04:02:36.097555    4775 client.go:171] duration metric: took 251.074208ms to LocalClient.Create
	I0930 04:02:38.099340    4775 start.go:128] duration metric: took 2.316620917s to createHost
	I0930 04:02:38.099411    4775 start.go:83] releasing machines lock for "force-systemd-env-516000", held for 2.317109042s
	W0930 04:02:38.099771    4775 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-516000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0930 04:02:38.130146    4775 out.go:201] 
	W0930 04:02:38.140517    4775 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0930 04:02:38.140571    4775 out.go:270] * 
	W0930 04:02:38.143264    4775 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0930 04:02:38.152329    4775 out.go:201] 

** /stderr **
docker_test.go:157: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-env-516000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-env-516000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-env-516000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (82.169917ms)

-- stdout --
	* The control-plane node force-systemd-env-516000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-env-516000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-env-516000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2024-09-30 04:02:38.251532 -0700 PDT m=+2553.964893251
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-516000 -n force-systemd-env-516000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-516000 -n force-systemd-env-516000: exit status 7 (35.144375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-env-516000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-env-516000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-env-516000
--- FAIL: TestForceSystemdEnv (11.47s)
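
Note: the log above shows minikube's one-shot recovery path: StartHost fails, the half-created profile is deleted, and a single retry runs 5 seconds later ("Will try again in 5 seconds") before the run gives up with GUEST_PROVISION. A minimal sketch of that shape; startHost and deleteHost are hypothetical stand-ins, not minikube's actual functions:

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// startWithRetry mirrors the log: one failed attempt, a cleanup,
	// a 5-second pause, one more attempt, then a hard failure.
	func startWithRetry(startHost func() error, deleteHost func()) error {
		err := startHost()
		if err == nil {
			return nil
		}
		fmt.Println("! StartHost failed, but will try again:", err)
		deleteHost()
		time.Sleep(5 * time.Second)
		if err := startHost(); err != nil {
			return fmt.Errorf("error provisioning guest: %w", err)
		}
		return nil
	}

	func main() {
		err := startWithRetry(
			func() error { return errors.New(`Failed to connect to "/var/run/socket_vmnet": connection refused`) },
			func() { fmt.Println("deleting half-created machine ...") },
		)
		fmt.Println(err)
	}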

TestFunctional/parallel/ServiceCmdConnect (35.61s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1627: (dbg) Run:  kubectl --context functional-853000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-853000 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-65d86f57f4-h8jms" [1d30b919-afce-4e3a-b478-3dfe2404ae59] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-65d86f57f4-h8jms" [1d30b919-afce-4e3a-b478-3dfe2404ae59] Running / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 14.007275667s
functional_test.go:1649: (dbg) Run:  out/minikube-darwin-arm64 -p functional-853000 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.105.4:31811
functional_test.go:1661: error fetching http://192.168.105.4:31811: Get "http://192.168.105.4:31811": dial tcp 192.168.105.4:31811: connect: connection refused
I0930 03:40:57.018653    1929 retry.go:31] will retry after 779.188294ms: Get "http://192.168.105.4:31811": dial tcp 192.168.105.4:31811: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:31811: Get "http://192.168.105.4:31811": dial tcp 192.168.105.4:31811: connect: connection refused
I0930 03:40:57.801828    1929 retry.go:31] will retry after 959.307964ms: Get "http://192.168.105.4:31811": dial tcp 192.168.105.4:31811: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:31811: Get "http://192.168.105.4:31811": dial tcp 192.168.105.4:31811: connect: connection refused
I0930 03:40:58.765155    1929 retry.go:31] will retry after 2.453005577s: Get "http://192.168.105.4:31811": dial tcp 192.168.105.4:31811: connect: connection refused
E0930 03:40:59.222030    1929 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19734-1406/.minikube/profiles/addons-584000/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:1661: error fetching http://192.168.105.4:31811: Get "http://192.168.105.4:31811": dial tcp 192.168.105.4:31811: connect: connection refused
I0930 03:41:01.222175    1929 retry.go:31] will retry after 2.717354131s: Get "http://192.168.105.4:31811": dial tcp 192.168.105.4:31811: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:31811: Get "http://192.168.105.4:31811": dial tcp 192.168.105.4:31811: connect: connection refused
I0930 03:41:03.943271    1929 retry.go:31] will retry after 4.937055058s: Get "http://192.168.105.4:31811": dial tcp 192.168.105.4:31811: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:31811: Get "http://192.168.105.4:31811": dial tcp 192.168.105.4:31811: connect: connection refused
I0930 03:41:08.883154    1929 retry.go:31] will retry after 8.451209959s: Get "http://192.168.105.4:31811": dial tcp 192.168.105.4:31811: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:31811: Get "http://192.168.105.4:31811": dial tcp 192.168.105.4:31811: connect: connection refused
functional_test.go:1681: failed to fetch http://192.168.105.4:31811: Get "http://192.168.105.4:31811": dial tcp 192.168.105.4:31811: connect: connection refused
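
Note: the retry intervals above (roughly 0.78s, 0.96s, 2.5s, 2.7s, 4.9s, 8.5s) grow geometrically with jitter between fetch attempts. A minimal sketch of that backoff pattern; the base delay and jitter factor are illustrative, not retry.go's actual constants:

	package main

	import (
		"fmt"
		"math/rand"
		"time"
	)

	func main() {
		// Doubling base delay plus random jitter up to one delay width.
		delay := 500 * time.Millisecond
		for attempt := 1; attempt <= 6; attempt++ {
			jittered := delay + time.Duration(rand.Int63n(int64(delay)))
			fmt.Printf("attempt %d: will retry after %v\n", attempt, jittered)
			delay *= 2
		}
	}
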
functional_test.go:1598: service test failed - dumping debug information
functional_test.go:1599: -----------------------service failure post-mortem--------------------------------
functional_test.go:1602: (dbg) Run:  kubectl --context functional-853000 describe po hello-node-connect
functional_test.go:1606: hello-node pod describe:
Name:             hello-node-connect-65d86f57f4-h8jms
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-853000/192.168.105.4
Start Time:       Mon, 30 Sep 2024 03:40:42 -0700
Labels:           app=hello-node-connect
                  pod-template-hash=65d86f57f4
Annotations:      <none>
Status:           Running
IP:               10.244.0.8
IPs:
  IP:           10.244.0.8
Controlled By:  ReplicaSet/hello-node-connect-65d86f57f4
Containers:
  echoserver-arm:
    Container ID:   docker://8a4438e280bf2ee872d3961af0c3c22546c58408bafad96b1408aa88a1eac083
    Image:          registry.k8s.io/echoserver-arm:1.8
    Image ID:       docker-pullable://registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
    Port:           <none>
    Host Port:      <none>
    State:          Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Mon, 30 Sep 2024 03:41:07 -0700
      Finished:     Mon, 30 Sep 2024 03:41:07 -0700
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Mon, 30 Sep 2024 03:40:50 -0700
      Finished:     Mon, 30 Sep 2024 03:40:50 -0700
    Ready:          False
    Restart Count:  2
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-2c7r7 (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True
  Initialized                 True
  Ready                       False
  ContainersReady             False
  PodScheduled                True
Volumes:
  kube-api-access-2c7r7:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                From               Message
  ----     ------     ----               ----               -------
  Normal   Scheduled  35s                default-scheduler  Successfully assigned default/hello-node-connect-65d86f57f4-h8jms to functional-853000
  Normal   Pulling    34s                kubelet            Pulling image "registry.k8s.io/echoserver-arm:1.8"
  Normal   Pulled     27s                kubelet            Successfully pulled image "registry.k8s.io/echoserver-arm:1.8" in 5.734s (6.632s including waiting). Image size: 84957542 bytes.
  Normal   Created    10s (x3 over 27s)  kubelet            Created container echoserver-arm
  Normal   Started    10s (x3 over 27s)  kubelet            Started container echoserver-arm
  Normal   Pulled     10s (x2 over 27s)  kubelet            Container image "registry.k8s.io/echoserver-arm:1.8" already present on machine
  Warning  BackOff    10s (x3 over 26s)  kubelet            Back-off restarting failed container echoserver-arm in pod hello-node-connect-65d86f57f4-h8jms_default(1d30b919-afce-4e3a-b478-3dfe2404ae59)

functional_test.go:1608: (dbg) Run:  kubectl --context functional-853000 logs -l app=hello-node-connect
functional_test.go:1612: hello-node logs:
exec /usr/sbin/nginx: exec format error
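
Note: "exec format error" means the kernel refused to run a binary built for a different architecture; the nginx binary inside registry.k8s.io/echoserver-arm:1.8 apparently is not arm64, so the container exits immediately and the pod never becomes Ready. One way to check a binary's target architecture is to read its ELF header; a sketch (the path is a placeholder):

	package main

	import (
		"debug/elf"
		"fmt"
	)

	func main() {
		f, err := elf.Open("/usr/sbin/nginx") // placeholder path
		if err != nil {
			fmt.Println(err)
			return
		}
		defer f.Close()
		// EM_AARCH64 would run on this cluster's arm64 node; EM_X86_64 would not.
		fmt.Println("ELF target machine:", f.Machine)
	}
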
functional_test.go:1614: (dbg) Run:  kubectl --context functional-853000 describe svc hello-node-connect
functional_test.go:1618: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.100.194.8
IPs:                      10.100.194.8
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  31811/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
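
Note: the empty Endpoints: line above is the direct cause of the earlier "connection refused" errors: with no Ready pod behind the app=hello-node-connect selector, NodePort 31811 has nothing to forward to. The same check can be made programmatically; a client-go sketch assuming a reachable kubeconfig at the default location (not part of the test suite):

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		ep, err := cs.CoreV1().Endpoints("default").Get(context.Background(), "hello-node-connect", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		// With no Ready pods behind the selector, Subsets stays empty and the
		// NodePort has nothing to forward to.
		fmt.Println("endpoint subsets:", len(ep.Subsets))
	}
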
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-853000 -n functional-853000
helpers_test.go:244: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p functional-853000 logs -n 25
helpers_test.go:252: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| Command |                                                         Args                                                         |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| image   | functional-853000 image save                                                                                         | functional-853000 | jenkins | v1.34.0 | 30 Sep 24 03:40 PDT | 30 Sep 24 03:40 PDT |
	|         | kicbase/echo-server:functional-853000                                                                                |                   |         |         |                     |                     |
	|         | /Users/jenkins/workspace/echo-server-save.tar                                                                        |                   |         |         |                     |                     |
	|         | --alsologtostderr                                                                                                    |                   |         |         |                     |                     |
	| image   | functional-853000 image rm                                                                                           | functional-853000 | jenkins | v1.34.0 | 30 Sep 24 03:40 PDT | 30 Sep 24 03:40 PDT |
	|         | kicbase/echo-server:functional-853000                                                                                |                   |         |         |                     |                     |
	|         | --alsologtostderr                                                                                                    |                   |         |         |                     |                     |
	| image   | functional-853000 image ls                                                                                           | functional-853000 | jenkins | v1.34.0 | 30 Sep 24 03:40 PDT | 30 Sep 24 03:40 PDT |
	| image   | functional-853000 image load                                                                                         | functional-853000 | jenkins | v1.34.0 | 30 Sep 24 03:40 PDT | 30 Sep 24 03:40 PDT |
	|         | /Users/jenkins/workspace/echo-server-save.tar                                                                        |                   |         |         |                     |                     |
	|         | --alsologtostderr                                                                                                    |                   |         |         |                     |                     |
	| image   | functional-853000 image ls                                                                                           | functional-853000 | jenkins | v1.34.0 | 30 Sep 24 03:40 PDT | 30 Sep 24 03:40 PDT |
	| image   | functional-853000 image save --daemon                                                                                | functional-853000 | jenkins | v1.34.0 | 30 Sep 24 03:40 PDT | 30 Sep 24 03:40 PDT |
	|         | kicbase/echo-server:functional-853000                                                                                |                   |         |         |                     |                     |
	|         | --alsologtostderr                                                                                                    |                   |         |         |                     |                     |
	| addons  | functional-853000 addons list                                                                                        | functional-853000 | jenkins | v1.34.0 | 30 Sep 24 03:40 PDT | 30 Sep 24 03:40 PDT |
	| addons  | functional-853000 addons list                                                                                        | functional-853000 | jenkins | v1.34.0 | 30 Sep 24 03:40 PDT | 30 Sep 24 03:40 PDT |
	|         | -o json                                                                                                              |                   |         |         |                     |                     |
	| service | functional-853000 service                                                                                            | functional-853000 | jenkins | v1.34.0 | 30 Sep 24 03:40 PDT | 30 Sep 24 03:40 PDT |
	|         | hello-node-connect --url                                                                                             |                   |         |         |                     |                     |
	| service | functional-853000 service list                                                                                       | functional-853000 | jenkins | v1.34.0 | 30 Sep 24 03:41 PDT | 30 Sep 24 03:41 PDT |
	| service | functional-853000 service list                                                                                       | functional-853000 | jenkins | v1.34.0 | 30 Sep 24 03:41 PDT | 30 Sep 24 03:41 PDT |
	|         | -o json                                                                                                              |                   |         |         |                     |                     |
	| service | functional-853000 service                                                                                            | functional-853000 | jenkins | v1.34.0 | 30 Sep 24 03:41 PDT | 30 Sep 24 03:41 PDT |
	|         | --namespace=default --https                                                                                          |                   |         |         |                     |                     |
	|         | --url hello-node                                                                                                     |                   |         |         |                     |                     |
	| service | functional-853000                                                                                                    | functional-853000 | jenkins | v1.34.0 | 30 Sep 24 03:41 PDT | 30 Sep 24 03:41 PDT |
	|         | service hello-node --url                                                                                             |                   |         |         |                     |                     |
	|         | --format={{.IP}}                                                                                                     |                   |         |         |                     |                     |
	| service | functional-853000 service                                                                                            | functional-853000 | jenkins | v1.34.0 | 30 Sep 24 03:41 PDT | 30 Sep 24 03:41 PDT |
	|         | hello-node --url                                                                                                     |                   |         |         |                     |                     |
	| start   | -p functional-853000                                                                                                 | functional-853000 | jenkins | v1.34.0 | 30 Sep 24 03:41 PDT |                     |
	|         | --dry-run --memory                                                                                                   |                   |         |         |                     |                     |
	|         | 250MB --alsologtostderr                                                                                              |                   |         |         |                     |                     |
	|         | --driver=qemu2                                                                                                       |                   |         |         |                     |                     |
	| ssh     | functional-853000 ssh findmnt                                                                                        | functional-853000 | jenkins | v1.34.0 | 30 Sep 24 03:41 PDT | 30 Sep 24 03:41 PDT |
	|         | -T /mount-9p | grep 9p                                                                                               |                   |         |         |                     |                     |
	| mount   | -p functional-853000                                                                                                 | functional-853000 | jenkins | v1.34.0 | 30 Sep 24 03:41 PDT |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port4087139157/001:/mount-9p      |                   |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	| ssh     | functional-853000 ssh -- ls                                                                                          | functional-853000 | jenkins | v1.34.0 | 30 Sep 24 03:41 PDT | 30 Sep 24 03:41 PDT |
	|         | -la /mount-9p                                                                                                        |                   |         |         |                     |                     |
	| ssh     | functional-853000 ssh cat                                                                                            | functional-853000 | jenkins | v1.34.0 | 30 Sep 24 03:41 PDT | 30 Sep 24 03:41 PDT |
	|         | /mount-9p/test-1727692866526806000                                                                                   |                   |         |         |                     |                     |
	| ssh     | functional-853000 ssh stat                                                                                           | functional-853000 | jenkins | v1.34.0 | 30 Sep 24 03:41 PDT | 30 Sep 24 03:41 PDT |
	|         | /mount-9p/created-by-test                                                                                            |                   |         |         |                     |                     |
	| ssh     | functional-853000 ssh stat                                                                                           | functional-853000 | jenkins | v1.34.0 | 30 Sep 24 03:41 PDT | 30 Sep 24 03:41 PDT |
	|         | /mount-9p/created-by-pod                                                                                             |                   |         |         |                     |                     |
	| ssh     | functional-853000 ssh sudo                                                                                           | functional-853000 | jenkins | v1.34.0 | 30 Sep 24 03:41 PDT | 30 Sep 24 03:41 PDT |
	|         | umount -f /mount-9p                                                                                                  |                   |         |         |                     |                     |
	| ssh     | functional-853000 ssh findmnt                                                                                        | functional-853000 | jenkins | v1.34.0 | 30 Sep 24 03:41 PDT |                     |
	|         | -T /mount-9p | grep 9p                                                                                               |                   |         |         |                     |                     |
	| mount   | -p functional-853000                                                                                                 | functional-853000 | jenkins | v1.34.0 | 30 Sep 24 03:41 PDT |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port2631883095/001:/mount-9p |                   |         |         |                     |                     |
	|         | --alsologtostderr -v=1 --port 46464                                                                                  |                   |         |         |                     |                     |
	| ssh     | functional-853000 ssh findmnt                                                                                        | functional-853000 | jenkins | v1.34.0 | 30 Sep 24 03:41 PDT |                     |
	|         | -T /mount-9p | grep 9p                                                                                               |                   |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/30 03:41:06
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.23.0 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0930 03:41:06.433102    3269 out.go:345] Setting OutFile to fd 1 ...
	I0930 03:41:06.433211    3269 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 03:41:06.433214    3269 out.go:358] Setting ErrFile to fd 2...
	I0930 03:41:06.433216    3269 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 03:41:06.433342    3269 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19734-1406/.minikube/bin
	I0930 03:41:06.434894    3269 out.go:352] Setting JSON to false
	I0930 03:41:06.454212    3269 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2429,"bootTime":1727690437,"procs":521,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0930 03:41:06.454318    3269 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0930 03:41:06.457534    3269 out.go:177] * [functional-853000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0930 03:41:06.465502    3269 out.go:177]   - MINIKUBE_LOCATION=19734
	I0930 03:41:06.465581    3269 notify.go:220] Checking for updates...
	I0930 03:41:06.474472    3269 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19734-1406/kubeconfig
	I0930 03:41:06.478512    3269 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0930 03:41:06.481553    3269 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0930 03:41:06.484522    3269 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19734-1406/.minikube
	I0930 03:41:06.487517    3269 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0930 03:41:06.490839    3269 config.go:182] Loaded profile config "functional-853000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0930 03:41:06.491086    3269 driver.go:394] Setting default libvirt URI to qemu:///system
	I0930 03:41:06.495443    3269 out.go:177] * Using the qemu2 driver based on existing profile
	I0930 03:41:06.502534    3269 start.go:297] selected driver: qemu2
	I0930 03:41:06.502540    3269 start.go:901] validating driver "qemu2" against &{Name:functional-853000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-853000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 03:41:06.502607    3269 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0930 03:41:06.508551    3269 out.go:201] 
	W0930 03:41:06.511508    3269 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0930 03:41:06.514876    3269 out.go:201] 
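	Note: the dry-run above exits with RSRC_INSUFFICIENT_REQ_MEMORY by design, since the requested 250MB is below minikube's 1800MB usable minimum. A minimal sketch of an invocation that would clear this validation, assuming the same profile and driver (the 2048MB value is illustrative, not taken from the test run):
	
	    # hypothetical: any value >= 1800MB passes the RSRC_INSUFFICIENT_REQ_MEMORY check
	    minikube start -p functional-853000 --dry-run --memory 2048MB --alsologtostderr --driver=qemu2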
	
	
	==> Docker <==
	Sep 30 10:41:08 functional-853000 dockerd[5742]: time="2024-09-30T10:41:08.464709861Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 30 10:41:08 functional-853000 dockerd[5742]: time="2024-09-30T10:41:08.464791901Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 30 10:41:08 functional-853000 cri-dockerd[5995]: time="2024-09-30T10:41:08Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/e2d4b4f259f925c3fbeeb558f7052cf7b9e95c3412e9364f487bd4e3ca62a9f6/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Sep 30 10:41:14 functional-853000 cri-dockerd[5995]: time="2024-09-30T10:41:14Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	Sep 30 10:41:14 functional-853000 dockerd[5742]: time="2024-09-30T10:41:14.513457785Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 30 10:41:14 functional-853000 dockerd[5742]: time="2024-09-30T10:41:14.513488993Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 30 10:41:14 functional-853000 dockerd[5742]: time="2024-09-30T10:41:14.513499618Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 30 10:41:14 functional-853000 dockerd[5742]: time="2024-09-30T10:41:14.513529700Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 30 10:41:14 functional-853000 dockerd[5735]: time="2024-09-30T10:41:14.548551628Z" level=info msg="ignoring event" container=5030f047e4435c182eb1a042fee794b1bfee34f537eb496970bf13d76975f788 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 30 10:41:14 functional-853000 dockerd[5742]: time="2024-09-30T10:41:14.548666834Z" level=info msg="shim disconnected" id=5030f047e4435c182eb1a042fee794b1bfee34f537eb496970bf13d76975f788 namespace=moby
	Sep 30 10:41:14 functional-853000 dockerd[5742]: time="2024-09-30T10:41:14.548700458Z" level=warning msg="cleaning up after shim disconnected" id=5030f047e4435c182eb1a042fee794b1bfee34f537eb496970bf13d76975f788 namespace=moby
	Sep 30 10:41:14 functional-853000 dockerd[5742]: time="2024-09-30T10:41:14.548704500Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 30 10:41:15 functional-853000 dockerd[5735]: time="2024-09-30T10:41:15.803278337Z" level=info msg="ignoring event" container=e2d4b4f259f925c3fbeeb558f7052cf7b9e95c3412e9364f487bd4e3ca62a9f6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 30 10:41:15 functional-853000 dockerd[5742]: time="2024-09-30T10:41:15.803364419Z" level=info msg="shim disconnected" id=e2d4b4f259f925c3fbeeb558f7052cf7b9e95c3412e9364f487bd4e3ca62a9f6 namespace=moby
	Sep 30 10:41:15 functional-853000 dockerd[5742]: time="2024-09-30T10:41:15.803396668Z" level=warning msg="cleaning up after shim disconnected" id=e2d4b4f259f925c3fbeeb558f7052cf7b9e95c3412e9364f487bd4e3ca62a9f6 namespace=moby
	Sep 30 10:41:15 functional-853000 dockerd[5742]: time="2024-09-30T10:41:15.803450625Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 30 10:41:15 functional-853000 dockerd[5742]: time="2024-09-30T10:41:15.808489638Z" level=warning msg="cleanup warnings time=\"2024-09-30T10:41:15Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Sep 30 10:41:16 functional-853000 dockerd[5742]: time="2024-09-30T10:41:16.357245496Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 30 10:41:16 functional-853000 dockerd[5742]: time="2024-09-30T10:41:16.362280671Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 30 10:41:16 functional-853000 dockerd[5742]: time="2024-09-30T10:41:16.362314462Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 30 10:41:16 functional-853000 dockerd[5742]: time="2024-09-30T10:41:16.362365461Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 30 10:41:16 functional-853000 dockerd[5735]: time="2024-09-30T10:41:16.406852761Z" level=info msg="ignoring event" container=243fc3a82e431d9d45780a1e302f431fe2039f957080d81eefe0e3106b32fe28 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 30 10:41:16 functional-853000 dockerd[5742]: time="2024-09-30T10:41:16.406986174Z" level=info msg="shim disconnected" id=243fc3a82e431d9d45780a1e302f431fe2039f957080d81eefe0e3106b32fe28 namespace=moby
	Sep 30 10:41:16 functional-853000 dockerd[5742]: time="2024-09-30T10:41:16.407020257Z" level=warning msg="cleaning up after shim disconnected" id=243fc3a82e431d9d45780a1e302f431fe2039f957080d81eefe0e3106b32fe28 namespace=moby
	Sep 30 10:41:16 functional-853000 dockerd[5742]: time="2024-09-30T10:41:16.407024798Z" level=info msg="cleaning up dead shim" namespace=moby
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	243fc3a82e431       72565bf5bbedf                                                                                         1 second ago         Exited              echoserver-arm            2                   28bdd1fa64009       hello-node-64b4f8f9ff-6b7ln
	5030f047e4435       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   3 seconds ago        Exited              mount-munger              0                   e2d4b4f259f92       busybox-mount
	8a4438e280bf2       72565bf5bbedf                                                                                         10 seconds ago       Exited              echoserver-arm            2                   03babc160fc43       hello-node-connect-65d86f57f4-h8jms
	4bb6b5c1c89a3       nginx@sha256:b5d3f3e104699f0768e5ca8626914c16e52647943c65274d8a9e63072bd015bb                         24 seconds ago       Running             myfrontend                0                   42325a97652b8       sp-pod
	825cb298f4544       nginx@sha256:a5127daff3d6f4606be3100a252419bfa84fd6ee5cd74d0feaca1a5068f97dcf                         42 seconds ago       Running             nginx                     0                   68694c5523dab       nginx-svc
	564c8ae5a2cfe       ba04bb24b9575                                                                                         About a minute ago   Running             storage-provisioner       3                   d7b31b6ef7894       storage-provisioner
	02345cb984a6a       2f6c962e7b831                                                                                         About a minute ago   Running             coredns                   2                   d69d14ad62a20       coredns-7c65d6cfc9-4zdzb
	27db576c200c9       24a140c548c07                                                                                         About a minute ago   Running             kube-proxy                2                   16fc7b0faad00       kube-proxy-4qpwb
	d28650452d093       ba04bb24b9575                                                                                         About a minute ago   Exited              storage-provisioner       2                   d7b31b6ef7894       storage-provisioner
	4bb467dcbd325       7f8aa378bb47d                                                                                         About a minute ago   Running             kube-scheduler            2                   c15a5c81ea32b       kube-scheduler-functional-853000
	bec1bed02aeda       279f381cb3736                                                                                         About a minute ago   Running             kube-controller-manager   2                   54bd121ed0522       kube-controller-manager-functional-853000
	3733d378882a1       27e3830e14027                                                                                         About a minute ago   Running             etcd                      2                   5b7489d88b3ab       etcd-functional-853000
	b95fc92e59928       d3f53a98c0a9d                                                                                         About a minute ago   Running             kube-apiserver            0                   3a13cf216af53       kube-apiserver-functional-853000
	688ba874c906e       2f6c962e7b831                                                                                         2 minutes ago        Exited              coredns                   1                   551ac841f99f6       coredns-7c65d6cfc9-4zdzb
	1fab90a485d06       24a140c548c07                                                                                         2 minutes ago        Exited              kube-proxy                1                   776bb4b16e088       kube-proxy-4qpwb
	fbae678341037       279f381cb3736                                                                                         2 minutes ago        Exited              kube-controller-manager   1                   2086e3773a4a6       kube-controller-manager-functional-853000
	64f5c752a43c4       7f8aa378bb47d                                                                                         2 minutes ago        Exited              kube-scheduler            1                   af69a4266867a       kube-scheduler-functional-853000
	6a96243e776f4       27e3830e14027                                                                                         2 minutes ago        Exited              etcd                      1                   b7e1679a9541e       etcd-functional-853000
	
	
	==> coredns [02345cb984a6] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1285658273]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (30-Sep-2024 10:39:39.819) (total time: 30003ms):
	Trace[1285658273]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (10:40:09.820)
	Trace[1285658273]: [30.003599003s] [30.003599003s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1436306590]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (30-Sep-2024 10:39:39.819) (total time: 30003ms):
	Trace[1436306590]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30003ms (10:40:09.823)
	Trace[1436306590]: [30.003757045s] [30.003757045s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[54088933]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (30-Sep-2024 10:39:39.819) (total time: 30004ms):
	Trace[54088933]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30004ms (10:40:09.823)
	Trace[54088933]: [30.004174117s] [30.004174117s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] 10.244.0.1:21452 - 16234 "A IN nginx-svc.default.svc.cluster.local. udp 64 false 4096" NOERROR qr,aa,rd 104 0.000097498s
	[INFO] 10.244.0.1:10868 - 24852 "AAAA IN nginx-svc.default.svc.cluster.local. udp 53 false 512" NOERROR qr,aa,rd 146 0.000093331s
	[INFO] 10.244.0.1:56835 - 40448 "A IN nginx-svc.default.svc.cluster.local. udp 53 false 512" NOERROR qr,aa,rd 104 0.000036624s
	[INFO] 10.244.0.1:30043 - 11979 "SVCB IN _dns.resolver.arpa. udp 36 false 512" NXDOMAIN qr,rd,ra 116 0.001085309s
	[INFO] 10.244.0.1:9902 - 50183 "A IN nginx-svc.default.svc.cluster.local. udp 64 false 1232" NOERROR qr,aa,rd 104 0.000094289s
	[INFO] 10.244.0.1:56498 - 35713 "AAAA IN nginx-svc.default.svc.cluster.local. udp 64 false 1232" NOERROR qr,aa,rd 146 0.000127913s
	
	
	==> coredns [688ba874c906] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	CoreDNS-1.11.3
	linux/arm64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:48374 - 48993 "HINFO IN 1939323519681135691.226061513637939359. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.010113027s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-853000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-853000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b68b4b088317c82ffa16da1c47933e77f0f5d128
	                    minikube.k8s.io/name=functional-853000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_30T03_38_10_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 30 Sep 2024 10:38:07 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-853000
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 30 Sep 2024 10:41:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 30 Sep 2024 10:41:11 +0000   Mon, 30 Sep 2024 10:38:06 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 30 Sep 2024 10:41:11 +0000   Mon, 30 Sep 2024 10:38:06 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 30 Sep 2024 10:41:11 +0000   Mon, 30 Sep 2024 10:38:06 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 30 Sep 2024 10:41:11 +0000   Mon, 30 Sep 2024 10:38:13 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.4
	  Hostname:    functional-853000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904740Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904740Ki
	  pods:               110
	System Info:
	  Machine ID:                 b4d9fdd4f09d40328f6a7bad7300540c
	  System UUID:                b4d9fdd4f09d40328f6a7bad7300540c
	  Boot ID:                    cc2dff85-4f9c-4e2e-8770-31f40e7b1c70
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://27.3.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-64b4f8f9ff-6b7ln                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         18s
	  default                     hello-node-connect-65d86f57f4-h8jms          0 (0%)        0 (0%)      0 (0%)           0 (0%)         35s
	  default                     nginx-svc                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         45s
	  default                     sp-pod                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	  kube-system                 coredns-7c65d6cfc9-4zdzb                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     3m2s
	  kube-system                 etcd-functional-853000                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         3m8s
	  kube-system                 kube-apiserver-functional-853000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         97s
	  kube-system                 kube-controller-manager-functional-853000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m8s
	  kube-system                 kube-proxy-4qpwb                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m2s
	  kube-system                 kube-scheduler-functional-853000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m8s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m1s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m2s                   kube-proxy       
	  Normal  Starting                 98s                    kube-proxy       
	  Normal  Starting                 2m32s                  kube-proxy       
	  Normal  NodeHasNoDiskPressure    3m8s (x2 over 3m8s)    kubelet          Node functional-853000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  3m8s (x2 over 3m8s)    kubelet          Node functional-853000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     3m8s (x2 over 3m8s)    kubelet          Node functional-853000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m8s                   kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 3m8s                   kubelet          Starting kubelet.
	  Normal  NodeReady                3m4s                   kubelet          Node functional-853000 status is now: NodeReady
	  Normal  RegisteredNode           3m3s                   node-controller  Node functional-853000 event: Registered Node functional-853000 in Controller
	  Normal  NodeHasNoDiskPressure    2m35s (x8 over 2m35s)  kubelet          Node functional-853000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  2m35s (x8 over 2m35s)  kubelet          Node functional-853000 status is now: NodeHasSufficientMemory
	  Normal  Starting                 2m35s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     2m35s (x7 over 2m35s)  kubelet          Node functional-853000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m35s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m30s                  node-controller  Node functional-853000 event: Registered Node functional-853000 in Controller
	  Normal  Starting                 101s                   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  101s (x8 over 101s)    kubelet          Node functional-853000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    101s (x8 over 101s)    kubelet          Node functional-853000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     101s (x7 over 101s)    kubelet          Node functional-853000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  101s                   kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           95s                    node-controller  Node functional-853000 event: Registered Node functional-853000 in Controller
	
	
	==> dmesg <==
	[Sep30 10:39] systemd-fstab-generator[4790]: Ignoring "noauto" option for root device
	[  +0.054569] kauditd_printk_skb: 33 callbacks suppressed
	[ +20.870021] systemd-fstab-generator[5250]: Ignoring "noauto" option for root device
	[  +0.051318] kauditd_printk_skb: 12 callbacks suppressed
	[  +0.112701] systemd-fstab-generator[5287]: Ignoring "noauto" option for root device
	[  +0.104497] systemd-fstab-generator[5299]: Ignoring "noauto" option for root device
	[  +0.125654] systemd-fstab-generator[5313]: Ignoring "noauto" option for root device
	[  +5.122083] kauditd_printk_skb: 91 callbacks suppressed
	[  +7.342762] systemd-fstab-generator[5948]: Ignoring "noauto" option for root device
	[  +0.094444] systemd-fstab-generator[5960]: Ignoring "noauto" option for root device
	[  +0.089537] systemd-fstab-generator[5972]: Ignoring "noauto" option for root device
	[  +0.106453] systemd-fstab-generator[5987]: Ignoring "noauto" option for root device
	[  +0.217217] systemd-fstab-generator[6156]: Ignoring "noauto" option for root device
	[  +0.957809] systemd-fstab-generator[6276]: Ignoring "noauto" option for root device
	[  +1.262752] kauditd_printk_skb: 189 callbacks suppressed
	[  +5.189588] kauditd_printk_skb: 44 callbacks suppressed
	[Sep30 10:40] systemd-fstab-generator[7429]: Ignoring "noauto" option for root device
	[  +5.084421] kauditd_printk_skb: 14 callbacks suppressed
	[  +7.366752] kauditd_printk_skb: 19 callbacks suppressed
	[  +5.050429] kauditd_printk_skb: 13 callbacks suppressed
	[  +5.364189] kauditd_printk_skb: 4 callbacks suppressed
	[  +7.209923] kauditd_printk_skb: 11 callbacks suppressed
	[  +9.390457] kauditd_printk_skb: 29 callbacks suppressed
	[Sep30 10:41] kauditd_printk_skb: 20 callbacks suppressed
	[  +7.124224] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> etcd [3733d378882a] <==
	{"level":"info","ts":"2024-09-30T10:39:37.303090Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"80e92d98c466b02f","local-member-id":"7520ddf439b1d16","added-peer-id":"7520ddf439b1d16","added-peer-peer-urls":["https://192.168.105.4:2380"]}
	{"level":"info","ts":"2024-09-30T10:39:37.303149Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"80e92d98c466b02f","local-member-id":"7520ddf439b1d16","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-30T10:39:37.303179Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-30T10:39:37.304484Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-30T10:39:37.305070Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-30T10:39:37.305141Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-09-30T10:39:37.305162Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-09-30T10:39:37.305805Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"7520ddf439b1d16","initial-advertise-peer-urls":["https://192.168.105.4:2380"],"listen-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.105.4:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-30T10:39:37.305832Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-30T10:39:38.388091Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 is starting a new election at term 3"}
	{"level":"info","ts":"2024-09-30T10:39:38.388165Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became pre-candidate at term 3"}
	{"level":"info","ts":"2024-09-30T10:39:38.388195Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgPreVoteResp from 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2024-09-30T10:39:38.388211Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became candidate at term 4"}
	{"level":"info","ts":"2024-09-30T10:39:38.388218Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgVoteResp from 7520ddf439b1d16 at term 4"}
	{"level":"info","ts":"2024-09-30T10:39:38.388231Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became leader at term 4"}
	{"level":"info","ts":"2024-09-30T10:39:38.388244Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7520ddf439b1d16 elected leader 7520ddf439b1d16 at term 4"}
	{"level":"info","ts":"2024-09-30T10:39:38.389724Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"7520ddf439b1d16","local-member-attributes":"{Name:functional-853000 ClientURLs:[https://192.168.105.4:2379]}","request-path":"/0/members/7520ddf439b1d16/attributes","cluster-id":"80e92d98c466b02f","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-30T10:39:38.389760Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-30T10:39:38.389964Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-30T10:39:38.389956Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-30T10:39:38.390212Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-30T10:39:38.390785Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-30T10:39:38.390987Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-30T10:39:38.391812Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-30T10:39:38.392071Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.4:2379"}
	
	
	==> etcd [6a96243e776f] <==
	{"level":"info","ts":"2024-09-30T10:38:43.734573Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-30T10:38:43.734594Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgPreVoteResp from 7520ddf439b1d16 at term 2"}
	{"level":"info","ts":"2024-09-30T10:38:43.734624Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became candidate at term 3"}
	{"level":"info","ts":"2024-09-30T10:38:43.734639Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgVoteResp from 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2024-09-30T10:38:43.734676Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became leader at term 3"}
	{"level":"info","ts":"2024-09-30T10:38:43.734715Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7520ddf439b1d16 elected leader 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2024-09-30T10:38:43.735725Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"7520ddf439b1d16","local-member-attributes":"{Name:functional-853000 ClientURLs:[https://192.168.105.4:2379]}","request-path":"/0/members/7520ddf439b1d16/attributes","cluster-id":"80e92d98c466b02f","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-30T10:38:43.735807Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-30T10:38:43.736055Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-30T10:38:43.736886Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-30T10:38:43.737860Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-30T10:38:43.738670Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-30T10:38:43.739332Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.4:2379"}
	{"level":"info","ts":"2024-09-30T10:38:43.747566Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-30T10:38:43.747614Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-30T10:39:22.372210Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-09-30T10:39:22.372236Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"functional-853000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"]}
	{"level":"warn","ts":"2024-09-30T10:39:22.372279Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-30T10:39:22.372320Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-30T10:39:22.379489Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.105.4:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-30T10:39:22.379513Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.105.4:2379: use of closed network connection"}
	{"level":"info","ts":"2024-09-30T10:39:22.379542Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"7520ddf439b1d16","current-leader-member-id":"7520ddf439b1d16"}
	{"level":"info","ts":"2024-09-30T10:39:22.380843Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-09-30T10:39:22.380877Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-09-30T10:39:22.380881Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"functional-853000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"]}
	
	
	==> kernel <==
	 10:41:18 up 3 min,  0 users,  load average: 0.71, 0.43, 0.18
	Linux functional-853000 5.10.207 #1 SMP PREEMPT Mon Sep 23 18:07:35 UTC 2024 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [b95fc92e5992] <==
	I0930 10:39:38.976589       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0930 10:39:38.976604       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0930 10:39:38.976649       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0930 10:39:38.976977       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0930 10:39:38.976988       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0930 10:39:38.977068       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0930 10:39:38.977129       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0930 10:39:38.977634       1 shared_informer.go:320] Caches are synced for configmaps
	I0930 10:39:38.980590       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0930 10:39:38.983252       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0930 10:39:38.985765       1 cache.go:39] Caches are synced for autoregister controller
	I0930 10:39:39.872253       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0930 10:39:39.981436       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.105.4]
	I0930 10:39:39.981984       1 controller.go:615] quota admission added evaluator for: endpoints
	I0930 10:39:39.983540       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0930 10:39:40.533009       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0930 10:39:40.536999       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0930 10:39:40.546804       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0930 10:39:40.553777       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0930 10:39:40.557543       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0930 10:40:25.187525       1 alloc.go:330] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.111.4.204"}
	I0930 10:40:32.553537       1 alloc.go:330] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.105.91.216"}
	I0930 10:40:42.923633       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0930 10:40:42.967758       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.100.194.8"}
	I0930 10:40:59.568549       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.107.118.11"}
	
	
	==> kube-controller-manager [bec1bed02aed] <==
	I0930 10:39:42.458656       1 shared_informer.go:320] Caches are synced for resource quota
	I0930 10:39:42.458976       1 shared_informer.go:320] Caches are synced for resource quota
	I0930 10:39:42.464122       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="249.308262ms"
	I0930 10:39:42.464444       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="37.623µs"
	I0930 10:39:42.864094       1 shared_informer.go:320] Caches are synced for garbage collector
	I0930 10:39:42.956230       1 shared_informer.go:320] Caches are synced for garbage collector
	I0930 10:39:42.956277       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0930 10:40:19.387277       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="9.851765ms"
	I0930 10:40:19.387608       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="87.789µs"
	I0930 10:40:40.609351       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="functional-853000"
	I0930 10:40:42.935035       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-65d86f57f4" duration="10.08365ms"
	I0930 10:40:42.946974       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-65d86f57f4" duration="11.913485ms"
	I0930 10:40:42.947004       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-65d86f57f4" duration="12µs"
	I0930 10:40:50.239633       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-65d86f57f4" duration="20.041µs"
	I0930 10:40:51.268084       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-65d86f57f4" duration="32.291µs"
	I0930 10:40:52.313145       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-65d86f57f4" duration="31.833µs"
	I0930 10:40:59.537052       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-64b4f8f9ff" duration="11.28174ms"
	I0930 10:40:59.540085       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-64b4f8f9ff" duration="3.007097ms"
	I0930 10:40:59.540116       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-64b4f8f9ff" duration="12.25µs"
	I0930 10:41:00.460078       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-64b4f8f9ff" duration="30.582µs"
	I0930 10:41:01.476938       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-64b4f8f9ff" duration="156.496µs"
	I0930 10:41:07.564699       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-65d86f57f4" duration="28.166µs"
	I0930 10:41:11.165058       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="functional-853000"
	I0930 10:41:16.332896       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-64b4f8f9ff" duration="33.916µs"
	I0930 10:41:16.713429       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-64b4f8f9ff" duration="34.333µs"
	
	
	==> kube-controller-manager [fbae67834103] <==
	I0930 10:38:47.553677       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0930 10:38:47.553751       1 shared_informer.go:320] Caches are synced for GC
	I0930 10:38:47.553759       1 shared_informer.go:320] Caches are synced for crt configmap
	I0930 10:38:47.560061       1 shared_informer.go:320] Caches are synced for cronjob
	I0930 10:38:47.560232       1 shared_informer.go:320] Caches are synced for namespace
	I0930 10:38:47.585276       1 shared_informer.go:320] Caches are synced for ephemeral
	I0930 10:38:47.585334       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0930 10:38:47.586462       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0930 10:38:47.586485       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0930 10:38:47.636256       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0930 10:38:47.637444       1 shared_informer.go:320] Caches are synced for disruption
	I0930 10:38:47.645574       1 shared_informer.go:320] Caches are synced for stateful set
	I0930 10:38:47.647717       1 shared_informer.go:320] Caches are synced for daemon sets
	I0930 10:38:47.739615       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="193.049257ms"
	I0930 10:38:47.739777       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="40.329µs"
	I0930 10:38:47.740847       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0930 10:38:47.742049       1 shared_informer.go:320] Caches are synced for resource quota
	I0930 10:38:47.785641       1 shared_informer.go:320] Caches are synced for endpoint
	I0930 10:38:47.788340       1 shared_informer.go:320] Caches are synced for resource quota
	I0930 10:38:48.104678       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="5.517764ms"
	I0930 10:38:48.105702       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="25.331µs"
	I0930 10:38:48.202355       1 shared_informer.go:320] Caches are synced for garbage collector
	I0930 10:38:48.235652       1 shared_informer.go:320] Caches are synced for garbage collector
	I0930 10:38:48.235668       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0930 10:39:15.129528       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="functional-853000"
	
	
	==> kube-proxy [1fab90a485d0] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0930 10:38:45.568906       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0930 10:38:45.578489       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.105.4"]
	E0930 10:38:45.578591       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0930 10:38:45.587848       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0930 10:38:45.587870       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0930 10:38:45.587884       1 server_linux.go:169] "Using iptables Proxier"
	I0930 10:38:45.588538       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0930 10:38:45.588634       1 server.go:483] "Version info" version="v1.31.1"
	I0930 10:38:45.588640       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0930 10:38:45.589370       1 config.go:199] "Starting service config controller"
	I0930 10:38:45.589385       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0930 10:38:45.589398       1 config.go:105] "Starting endpoint slice config controller"
	I0930 10:38:45.589401       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0930 10:38:45.590669       1 config.go:328] "Starting node config controller"
	I0930 10:38:45.590674       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0930 10:38:45.689742       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0930 10:38:45.689748       1 shared_informer.go:320] Caches are synced for service config
	I0930 10:38:45.690694       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [27db576c200c] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0930 10:39:39.846778       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0930 10:39:39.850079       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.105.4"]
	E0930 10:39:39.850105       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0930 10:39:39.857662       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0930 10:39:39.857678       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0930 10:39:39.857689       1 server_linux.go:169] "Using iptables Proxier"
	I0930 10:39:39.858360       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0930 10:39:39.858479       1 server.go:483] "Version info" version="v1.31.1"
	I0930 10:39:39.858487       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0930 10:39:39.858966       1 config.go:199] "Starting service config controller"
	I0930 10:39:39.858980       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0930 10:39:39.858989       1 config.go:105] "Starting endpoint slice config controller"
	I0930 10:39:39.858991       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0930 10:39:39.859290       1 config.go:328] "Starting node config controller"
	I0930 10:39:39.859430       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0930 10:39:39.964768       1 shared_informer.go:320] Caches are synced for node config
	I0930 10:39:39.964768       1 shared_informer.go:320] Caches are synced for service config
	I0930 10:39:39.964796       1 shared_informer.go:320] Caches are synced for endpoint slice config
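
Note: in both kube-proxy logs above, the leading "add table ip kube-proxy" fragments are the tail of a truncated err=< block; kube-proxy tried to clean up nftables rules, the guest kernel rejected the operation ("Operation not supported"), and it fell back to the iptables proxier, as the "Using iptables Proxier" lines confirm. One way to verify the fallback actually programmed rules, as a sketch (assumes the functional-853000 VM is still running):

  minikube -p functional-853000 ssh -- sudo iptables -t nat -L KUBE-SERVICES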
	
	
	==> kube-scheduler [4bb467dcbd32] <==
	I0930 10:39:37.769574       1 serving.go:386] Generated self-signed cert in-memory
	W0930 10:39:38.882904       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0930 10:39:38.882957       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0930 10:39:38.882979       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0930 10:39:38.883007       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0930 10:39:38.905217       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0930 10:39:38.905232       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0930 10:39:38.906202       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0930 10:39:38.906279       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0930 10:39:38.906293       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0930 10:39:38.906329       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0930 10:39:39.008311       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [64f5c752a43c] <==
	I0930 10:38:43.471563       1 serving.go:386] Generated self-signed cert in-memory
	W0930 10:38:44.239904       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0930 10:38:44.240007       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0930 10:38:44.240047       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0930 10:38:44.240071       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0930 10:38:44.245901       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0930 10:38:44.245915       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0930 10:38:44.246862       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0930 10:38:44.246909       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0930 10:38:44.246920       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0930 10:38:44.246926       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0930 10:38:44.347993       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0930 10:39:22.364559       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0930 10:39:22.364588       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	E0930 10:39:22.364665       1 run.go:72] "command failed" err="finished without leader elect"
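
Note: both scheduler instances log the same extension-apiserver-authentication warnings; the scheduler keeps running without that lookup, and the closing "finished without leader elect" error appears to be the older instance ([64f5c752a43c]) being torn down when the control plane restarted, after which [4bb467dcbd32] above took over. If one did want to satisfy the lookup, the log's suggested fix, instantiated for the scheduler's user rather than a service account, would be roughly (a sketch, not something the test requires):

  kubectl --context functional-853000 -n kube-system create rolebinding extension-apiserver-authentication-reader \
    --role=extension-apiserver-authentication-reader --user=system:kube-scheduler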
	
	
	==> kubelet <==
	Sep 30 10:40:52 functional-853000 kubelet[6283]: I0930 10:40:52.553995    6283 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-d3182f4a-8ac2-43fb-8e74-6b1462ba7bbf\" (UniqueName: \"kubernetes.io/host-path/248f6bb8-5947-423c-8daf-51ff8be514d9-pvc-d3182f4a-8ac2-43fb-8e74-6b1462ba7bbf\") pod \"sp-pod\" (UID: \"248f6bb8-5947-423c-8daf-51ff8be514d9\") " pod="default/sp-pod"
	Sep 30 10:40:52 functional-853000 kubelet[6283]: I0930 10:40:52.554123    6283 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jh78l\" (UniqueName: \"kubernetes.io/projected/248f6bb8-5947-423c-8daf-51ff8be514d9-kube-api-access-jh78l\") pod \"sp-pod\" (UID: \"248f6bb8-5947-423c-8daf-51ff8be514d9\") " pod="default/sp-pod"
	Sep 30 10:40:59 functional-853000 kubelet[6283]: I0930 10:40:59.531066    6283 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/sp-pod" podStartSLOduration=6.815864987 podStartE2EDuration="7.531052526s" podCreationTimestamp="2024-09-30 10:40:52 +0000 UTC" firstStartedPulling="2024-09-30 10:40:52.786762107 +0000 UTC m=+76.525062938" lastFinishedPulling="2024-09-30 10:40:53.501949604 +0000 UTC m=+77.240250477" observedRunningTime="2024-09-30 10:40:54.374169663 +0000 UTC m=+78.112470577" watchObservedRunningTime="2024-09-30 10:40:59.531052526 +0000 UTC m=+83.269353357"
	Sep 30 10:40:59 functional-853000 kubelet[6283]: I0930 10:40:59.631810    6283 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jx7tm\" (UniqueName: \"kubernetes.io/projected/ae4198e9-671d-44d8-b7a9-61c6eb8d80d5-kube-api-access-jx7tm\") pod \"hello-node-64b4f8f9ff-6b7ln\" (UID: \"ae4198e9-671d-44d8-b7a9-61c6eb8d80d5\") " pod="default/hello-node-64b4f8f9ff-6b7ln"
	Sep 30 10:41:00 functional-853000 kubelet[6283]: I0930 10:41:00.450231    6283 scope.go:117] "RemoveContainer" containerID="29a0c50b5bca783e6c90d6ea5a82a90a37c3ef1d5f2d71f103b8955cee369913"
	Sep 30 10:41:01 functional-853000 kubelet[6283]: I0930 10:41:01.467735    6283 scope.go:117] "RemoveContainer" containerID="29a0c50b5bca783e6c90d6ea5a82a90a37c3ef1d5f2d71f103b8955cee369913"
	Sep 30 10:41:01 functional-853000 kubelet[6283]: I0930 10:41:01.468037    6283 scope.go:117] "RemoveContainer" containerID="2aa1de51f43f0aa05fca73eaa1d31b222d83e6ce93b968d49857e514959115ae"
	Sep 30 10:41:01 functional-853000 kubelet[6283]: E0930 10:41:01.468190    6283 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 10s restarting failed container=echoserver-arm pod=hello-node-64b4f8f9ff-6b7ln_default(ae4198e9-671d-44d8-b7a9-61c6eb8d80d5)\"" pod="default/hello-node-64b4f8f9ff-6b7ln" podUID="ae4198e9-671d-44d8-b7a9-61c6eb8d80d5"
	Sep 30 10:41:07 functional-853000 kubelet[6283]: I0930 10:41:07.328535    6283 scope.go:117] "RemoveContainer" containerID="e8170ed9e89400deb77f2f9e16ad9aa5557ca685c89e52feff6ec64f876920e8"
	Sep 30 10:41:07 functional-853000 kubelet[6283]: I0930 10:41:07.559448    6283 scope.go:117] "RemoveContainer" containerID="e8170ed9e89400deb77f2f9e16ad9aa5557ca685c89e52feff6ec64f876920e8"
	Sep 30 10:41:07 functional-853000 kubelet[6283]: I0930 10:41:07.559603    6283 scope.go:117] "RemoveContainer" containerID="8a4438e280bf2ee872d3961af0c3c22546c58408bafad96b1408aa88a1eac083"
	Sep 30 10:41:07 functional-853000 kubelet[6283]: E0930 10:41:07.559672    6283 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-connect-65d86f57f4-h8jms_default(1d30b919-afce-4e3a-b478-3dfe2404ae59)\"" pod="default/hello-node-connect-65d86f57f4-h8jms" podUID="1d30b919-afce-4e3a-b478-3dfe2404ae59"
	Sep 30 10:41:08 functional-853000 kubelet[6283]: I0930 10:41:08.109159    6283 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/19cd2428-d35f-43aa-87c7-938efd6e59db-test-volume\") pod \"busybox-mount\" (UID: \"19cd2428-d35f-43aa-87c7-938efd6e59db\") " pod="default/busybox-mount"
	Sep 30 10:41:08 functional-853000 kubelet[6283]: I0930 10:41:08.109179    6283 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-thlwg\" (UniqueName: \"kubernetes.io/projected/19cd2428-d35f-43aa-87c7-938efd6e59db-kube-api-access-thlwg\") pod \"busybox-mount\" (UID: \"19cd2428-d35f-43aa-87c7-938efd6e59db\") " pod="default/busybox-mount"
	Sep 30 10:41:15 functional-853000 kubelet[6283]: I0930 10:41:15.887070    6283 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/19cd2428-d35f-43aa-87c7-938efd6e59db-test-volume\") pod \"19cd2428-d35f-43aa-87c7-938efd6e59db\" (UID: \"19cd2428-d35f-43aa-87c7-938efd6e59db\") "
	Sep 30 10:41:15 functional-853000 kubelet[6283]: I0930 10:41:15.887100    6283 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-thlwg\" (UniqueName: \"kubernetes.io/projected/19cd2428-d35f-43aa-87c7-938efd6e59db-kube-api-access-thlwg\") pod \"19cd2428-d35f-43aa-87c7-938efd6e59db\" (UID: \"19cd2428-d35f-43aa-87c7-938efd6e59db\") "
	Sep 30 10:41:15 functional-853000 kubelet[6283]: I0930 10:41:15.887148    6283 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/19cd2428-d35f-43aa-87c7-938efd6e59db-test-volume" (OuterVolumeSpecName: "test-volume") pod "19cd2428-d35f-43aa-87c7-938efd6e59db" (UID: "19cd2428-d35f-43aa-87c7-938efd6e59db"). InnerVolumeSpecName "test-volume". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 30 10:41:15 functional-853000 kubelet[6283]: I0930 10:41:15.887174    6283 reconciler_common.go:288] "Volume detached for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/19cd2428-d35f-43aa-87c7-938efd6e59db-test-volume\") on node \"functional-853000\" DevicePath \"\""
	Sep 30 10:41:15 functional-853000 kubelet[6283]: I0930 10:41:15.890487    6283 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19cd2428-d35f-43aa-87c7-938efd6e59db-kube-api-access-thlwg" (OuterVolumeSpecName: "kube-api-access-thlwg") pod "19cd2428-d35f-43aa-87c7-938efd6e59db" (UID: "19cd2428-d35f-43aa-87c7-938efd6e59db"). InnerVolumeSpecName "kube-api-access-thlwg". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 30 10:41:15 functional-853000 kubelet[6283]: I0930 10:41:15.988014    6283 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-thlwg\" (UniqueName: \"kubernetes.io/projected/19cd2428-d35f-43aa-87c7-938efd6e59db-kube-api-access-thlwg\") on node \"functional-853000\" DevicePath \"\""
	Sep 30 10:41:16 functional-853000 kubelet[6283]: I0930 10:41:16.327871    6283 scope.go:117] "RemoveContainer" containerID="2aa1de51f43f0aa05fca73eaa1d31b222d83e6ce93b968d49857e514959115ae"
	Sep 30 10:41:16 functional-853000 kubelet[6283]: I0930 10:41:16.705705    6283 scope.go:117] "RemoveContainer" containerID="2aa1de51f43f0aa05fca73eaa1d31b222d83e6ce93b968d49857e514959115ae"
	Sep 30 10:41:16 functional-853000 kubelet[6283]: I0930 10:41:16.705933    6283 scope.go:117] "RemoveContainer" containerID="243fc3a82e431d9d45780a1e302f431fe2039f957080d81eefe0e3106b32fe28"
	Sep 30 10:41:16 functional-853000 kubelet[6283]: E0930 10:41:16.706032    6283 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-64b4f8f9ff-6b7ln_default(ae4198e9-671d-44d8-b7a9-61c6eb8d80d5)\"" pod="default/hello-node-64b4f8f9ff-6b7ln" podUID="ae4198e9-671d-44d8-b7a9-61c6eb8d80d5"
	Sep 30 10:41:16 functional-853000 kubelet[6283]: I0930 10:41:16.717138    6283 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e2d4b4f259f925c3fbeeb558f7052cf7b9e95c3412e9364f487bd4e3ca62a9f6"
	
	
	==> storage-provisioner [564c8ae5a2cf] <==
	I0930 10:39:54.380601       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0930 10:39:54.383929       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0930 10:39:54.383944       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0930 10:40:11.801441       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0930 10:40:11.802058       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-853000_8ad0d485-9178-41ce-b89b-35b16c7a6ea5!
	I0930 10:40:11.803274       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"64be4ddd-6261-4387-8aa2-5949e3abc13c", APIVersion:"v1", ResourceVersion:"617", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-853000_8ad0d485-9178-41ce-b89b-35b16c7a6ea5 became leader
	I0930 10:40:11.904093       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-853000_8ad0d485-9178-41ce-b89b-35b16c7a6ea5!
	I0930 10:40:39.134450       1 controller.go:1332] provision "default/myclaim" class "standard": started
	I0930 10:40:39.134715       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"d3182f4a-8ac2-43fb-8e74-6b1462ba7bbf", APIVersion:"v1", ResourceVersion:"691", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/myclaim"
	I0930 10:40:39.134486       1 storage_provisioner.go:61] Provisioning volume {&StorageClass{ObjectMeta:{standard    c095b9f1-549e-41b6-8c4c-dec3a7fb6e39 321 0 2024-09-30 10:38:15 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:EnsureExists] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"standard"},"provisioner":"k8s.io/minikube-hostpath"}
	 storageclass.kubernetes.io/is-default-class:true] [] []  [{kubectl-client-side-apply Update storage.k8s.io/v1 2024-09-30 10:38:15 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{}}},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}}}]},Provisioner:k8s.io/minikube-hostpath,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{},} pvc-d3182f4a-8ac2-43fb-8e74-6b1462ba7bbf &PersistentVolumeClaim{ObjectMeta:{myclaim  default  d3182f4a-8ac2-43fb-8e74-6b1462ba7bbf 691 0 2024-09-30 10:40:39 +0000 UTC <nil> <nil> map[] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
	 volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] [] [kubernetes.io/pvc-protection]  [{kube-controller-manager Update v1 2024-09-30 10:40:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}}}} {kubectl-client-side-apply Update v1 2024-09-30 10:40:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{524288000 0} {<nil>} 500Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*standard,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} nil} to /tmp/hostpath-provisioner/default/myclaim
	I0930 10:40:39.134971       1 controller.go:1439] provision "default/myclaim" class "standard": volume "pvc-d3182f4a-8ac2-43fb-8e74-6b1462ba7bbf" provisioned
	I0930 10:40:39.134998       1 controller.go:1456] provision "default/myclaim" class "standard": succeeded
	I0930 10:40:39.135020       1 volume_store.go:212] Trying to save persistentvolume "pvc-d3182f4a-8ac2-43fb-8e74-6b1462ba7bbf"
	I0930 10:40:39.139427       1 volume_store.go:219] persistentvolume "pvc-d3182f4a-8ac2-43fb-8e74-6b1462ba7bbf" saved
	I0930 10:40:39.140020       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"d3182f4a-8ac2-43fb-8e74-6b1462ba7bbf", APIVersion:"v1", ResourceVersion:"691", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-d3182f4a-8ac2-43fb-8e74-6b1462ba7bbf
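
Note: the claim being provisioned is dumped in full by storage_provisioner.go:61 above; stripped down to its spec, an equivalent manifest would look roughly like the following (fields reconstructed from the logged object, not taken from the test's actual source):

kubectl --context functional-853000 apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim
  namespace: default
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 500Mi
  storageClassName: standard
  volumeMode: Filesystem
EOF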
	
	
	==> storage-provisioner [d28650452d09] <==
	I0930 10:39:39.809379       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0930 10:39:39.810001       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
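
Note: this instance ([d28650452d09], 10:39:39) actually ran before the healthy one above ([564c8ae5a2cf], 10:39:54): it started while the apiserver behind the 10.96.0.1:443 service VIP was still coming back up, exited fatally on the connection refused, and was replaced. The restart shows up on the pod's restart count, e.g.:

  kubectl --context functional-853000 -n kube-system get pod storage-provisioner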
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p functional-853000 -n functional-853000
helpers_test.go:261: (dbg) Run:  kubectl --context functional-853000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-mount
helpers_test.go:274: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context functional-853000 describe pod busybox-mount
helpers_test.go:282: (dbg) kubectl --context functional-853000 describe pod busybox-mount:

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-853000/192.168.105.4
	Start Time:       Mon, 30 Sep 2024 03:41:08 -0700
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.11
	IPs:
	  IP:  10.244.0.11
	Containers:
	  mount-munger:
	    Container ID:  docker://5030f047e4435c182eb1a042fee794b1bfee34f537eb496970bf13d76975f788
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      docker-pullable://gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Mon, 30 Sep 2024 03:41:14 -0700
	      Finished:     Mon, 30 Sep 2024 03:41:14 -0700
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-thlwg (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-thlwg:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  10s   default-scheduler  Successfully assigned default/busybox-mount to functional-853000
	  Normal  Pulling    10s   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     4s    kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 5.955s (5.955s including waiting). Image size: 3547125 bytes.
	  Normal  Created    4s    kubelet            Created container mount-munger
	  Normal  Started    4s    kubelet            Started container mount-munger

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (35.61s)
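
Note: the kubelet log in the post-mortem shows the echoserver-arm container of hello-node-connect-65d86f57f4-h8jms in CrashLoopBackOff, which matches the service-connect check failing while busybox-mount itself completed. Pulling the crashed container's output would be a natural next step, e.g. (a sketch):

  kubectl --context functional-853000 logs deployment/hello-node-connect --previous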

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (64.12s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-darwin-arm64 -p ha-937000 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Done: out/minikube-darwin-arm64 -p ha-937000 node stop m02 -v=7 --alsologtostderr: (12.189472958s)
ha_test.go:369: (dbg) Run:  out/minikube-darwin-arm64 -p ha-937000 status -v=7 --alsologtostderr
E0930 03:46:54.377196    1929 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19734-1406/.minikube/profiles/functional-853000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:369: (dbg) Done: out/minikube-darwin-arm64 -p ha-937000 status -v=7 --alsologtostderr: (25.956822958s)
ha_test.go:375: status says not all three control-plane nodes are present: args "out/minikube-darwin-arm64 -p ha-937000 status -v=7 --alsologtostderr": 
ha_test.go:378: status says not three hosts are running: args "out/minikube-darwin-arm64 -p ha-937000 status -v=7 --alsologtostderr": 
ha_test.go:381: status says not three kubelets are running: args "out/minikube-darwin-arm64 -p ha-937000 status -v=7 --alsologtostderr": 
ha_test.go:384: status says not two apiservers are running: args "out/minikube-darwin-arm64 -p ha-937000 status -v=7 --alsologtostderr": 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-937000 -n ha-937000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-937000 -n ha-937000: exit status 3 (25.976283208s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0930 03:47:31.959326    3654 status.go:410] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0930 03:47:31.959339    3654 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "ha-937000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (64.12s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (51.93s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:390: (dbg) Done: out/minikube-darwin-arm64 profile list --output json: (25.973165375s)
ha_test.go:413: expected profile "ha-937000" in json of 'profile list' to have "Degraded" status but have "Unknown" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-937000\",\"Status\":\"Unknown\",\"Config\":{\"Name\":\"ha-937000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-937000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.105.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.105.5\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.105.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.105.7\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.105.8\",\"Port\":0,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-937000 -n ha-937000
E0930 03:48:16.299648    1929 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19734-1406/.minikube/profiles/functional-853000/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-937000 -n ha-937000: exit status 3 (25.957174541s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0930 03:48:23.889193    3664 status.go:410] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0930 03:48:23.889208    3664 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "ha-937000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (51.93s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (87.04s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-darwin-arm64 -p ha-937000 node start m02 -v=7 --alsologtostderr
ha_test.go:420: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-937000 node start m02 -v=7 --alsologtostderr: exit status 80 (5.082267125s)

                                                
                                                
-- stdout --
	* Starting "ha-937000-m02" control-plane node in "ha-937000" cluster
	* Restarting existing qemu2 VM for "ha-937000-m02" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-937000-m02" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0930 03:48:23.923697    3666 out.go:345] Setting OutFile to fd 1 ...
	I0930 03:48:23.923960    3666 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 03:48:23.923967    3666 out.go:358] Setting ErrFile to fd 2...
	I0930 03:48:23.923970    3666 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 03:48:23.924132    3666 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19734-1406/.minikube/bin
	I0930 03:48:23.924378    3666 mustload.go:65] Loading cluster: ha-937000
	I0930 03:48:23.924624    3666 config.go:182] Loaded profile config "ha-937000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	W0930 03:48:23.924865    3666 host.go:58] "ha-937000-m02" host status: Stopped
	I0930 03:48:23.928533    3666 out.go:177] * Starting "ha-937000-m02" control-plane node in "ha-937000" cluster
	I0930 03:48:23.932387    3666 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0930 03:48:23.932399    3666 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0930 03:48:23.932405    3666 cache.go:56] Caching tarball of preloaded images
	I0930 03:48:23.932471    3666 preload.go:172] Found /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0930 03:48:23.932478    3666 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0930 03:48:23.932534    3666 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19734-1406/.minikube/profiles/ha-937000/config.json ...
	I0930 03:48:23.932890    3666 start.go:360] acquireMachinesLock for ha-937000-m02: {Name:mk6ecfc7d8126da9fa2597e515ffeee9de676ee7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0930 03:48:23.932950    3666 start.go:364] duration metric: took 28.5µs to acquireMachinesLock for "ha-937000-m02"
	I0930 03:48:23.932957    3666 start.go:96] Skipping create...Using existing machine configuration
	I0930 03:48:23.932961    3666 fix.go:54] fixHost starting: m02
	I0930 03:48:23.933051    3666 fix.go:112] recreateIfNeeded on ha-937000-m02: state=Stopped err=<nil>
	W0930 03:48:23.933057    3666 fix.go:138] unexpected machine state, will restart: <nil>
	I0930 03:48:23.937175    3666 out.go:177] * Restarting existing qemu2 VM for "ha-937000-m02" ...
	I0930 03:48:23.941375    3666 qemu.go:418] Using hvf for hardware acceleration
	I0930 03:48:23.941415    3666 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/ha-937000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19734-1406/.minikube/machines/ha-937000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/ha-937000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=82:48:a5:02:6a:e7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/ha-937000-m02/disk.qcow2
	I0930 03:48:23.943660    3666 main.go:141] libmachine: STDOUT: 
	I0930 03:48:23.943676    3666 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0930 03:48:23.943706    3666 fix.go:56] duration metric: took 10.744166ms for fixHost
	I0930 03:48:23.943712    3666 start.go:83] releasing machines lock for "ha-937000-m02", held for 10.756166ms
	W0930 03:48:23.943720    3666 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0930 03:48:23.943754    3666 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0930 03:48:23.943757    3666 start.go:729] Will try again in 5 seconds ...
	I0930 03:48:28.945048    3666 start.go:360] acquireMachinesLock for ha-937000-m02: {Name:mk6ecfc7d8126da9fa2597e515ffeee9de676ee7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0930 03:48:28.945166    3666 start.go:364] duration metric: took 98.083µs to acquireMachinesLock for "ha-937000-m02"
	I0930 03:48:28.945199    3666 start.go:96] Skipping create...Using existing machine configuration
	I0930 03:48:28.945203    3666 fix.go:54] fixHost starting: m02
	I0930 03:48:28.945370    3666 fix.go:112] recreateIfNeeded on ha-937000-m02: state=Stopped err=<nil>
	W0930 03:48:28.945377    3666 fix.go:138] unexpected machine state, will restart: <nil>
	I0930 03:48:28.947839    3666 out.go:177] * Restarting existing qemu2 VM for "ha-937000-m02" ...
	I0930 03:48:28.951861    3666 qemu.go:418] Using hvf for hardware acceleration
	I0930 03:48:28.951899    3666 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/ha-937000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19734-1406/.minikube/machines/ha-937000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/ha-937000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=82:48:a5:02:6a:e7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/ha-937000-m02/disk.qcow2
	I0930 03:48:28.954186    3666 main.go:141] libmachine: STDOUT: 
	I0930 03:48:28.954210    3666 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0930 03:48:28.954237    3666 fix.go:56] duration metric: took 9.033542ms for fixHost
	I0930 03:48:28.954240    3666 start.go:83] releasing machines lock for "ha-937000-m02", held for 9.0695ms
	W0930 03:48:28.954288    3666 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p ha-937000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-937000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0930 03:48:28.957925    3666 out.go:201] 
	W0930 03:48:28.961913    3666 out.go:270] X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0930 03:48:28.961918    3666 out.go:270] * 
	* 
	W0930 03:48:28.963678    3666 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0930 03:48:28.967943    3666 out.go:201] 

                                                
                                                
** /stderr **
ha_test.go:422: I0930 03:48:23.923697    3666 out.go:345] Setting OutFile to fd 1 ...
I0930 03:48:23.923960    3666 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0930 03:48:23.923967    3666 out.go:358] Setting ErrFile to fd 2...
I0930 03:48:23.923970    3666 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0930 03:48:23.924132    3666 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19734-1406/.minikube/bin
I0930 03:48:23.924378    3666 mustload.go:65] Loading cluster: ha-937000
I0930 03:48:23.924624    3666 config.go:182] Loaded profile config "ha-937000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
W0930 03:48:23.924865    3666 host.go:58] "ha-937000-m02" host status: Stopped
I0930 03:48:23.928533    3666 out.go:177] * Starting "ha-937000-m02" control-plane node in "ha-937000" cluster
I0930 03:48:23.932387    3666 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
I0930 03:48:23.932399    3666 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
I0930 03:48:23.932405    3666 cache.go:56] Caching tarball of preloaded images
I0930 03:48:23.932471    3666 preload.go:172] Found /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I0930 03:48:23.932478    3666 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
I0930 03:48:23.932534    3666 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19734-1406/.minikube/profiles/ha-937000/config.json ...
I0930 03:48:23.932890    3666 start.go:360] acquireMachinesLock for ha-937000-m02: {Name:mk6ecfc7d8126da9fa2597e515ffeee9de676ee7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0930 03:48:23.932950    3666 start.go:364] duration metric: took 28.5µs to acquireMachinesLock for "ha-937000-m02"
I0930 03:48:23.932957    3666 start.go:96] Skipping create...Using existing machine configuration
I0930 03:48:23.932961    3666 fix.go:54] fixHost starting: m02
I0930 03:48:23.933051    3666 fix.go:112] recreateIfNeeded on ha-937000-m02: state=Stopped err=<nil>
W0930 03:48:23.933057    3666 fix.go:138] unexpected machine state, will restart: <nil>
I0930 03:48:23.937175    3666 out.go:177] * Restarting existing qemu2 VM for "ha-937000-m02" ...
I0930 03:48:23.941375    3666 qemu.go:418] Using hvf for hardware acceleration
I0930 03:48:23.941415    3666 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/ha-937000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19734-1406/.minikube/machines/ha-937000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/ha-937000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=82:48:a5:02:6a:e7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/ha-937000-m02/disk.qcow2
I0930 03:48:23.943660    3666 main.go:141] libmachine: STDOUT: 
I0930 03:48:23.943676    3666 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

                                                
                                                
I0930 03:48:23.943706    3666 fix.go:56] duration metric: took 10.744166ms for fixHost
I0930 03:48:23.943712    3666 start.go:83] releasing machines lock for "ha-937000-m02", held for 10.756166ms
W0930 03:48:23.943720    3666 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0930 03:48:23.943754    3666 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0930 03:48:23.943757    3666 start.go:729] Will try again in 5 seconds ...
I0930 03:48:28.945048    3666 start.go:360] acquireMachinesLock for ha-937000-m02: {Name:mk6ecfc7d8126da9fa2597e515ffeee9de676ee7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0930 03:48:28.945166    3666 start.go:364] duration metric: took 98.083µs to acquireMachinesLock for "ha-937000-m02"
I0930 03:48:28.945199    3666 start.go:96] Skipping create...Using existing machine configuration
I0930 03:48:28.945203    3666 fix.go:54] fixHost starting: m02
I0930 03:48:28.945370    3666 fix.go:112] recreateIfNeeded on ha-937000-m02: state=Stopped err=<nil>
W0930 03:48:28.945377    3666 fix.go:138] unexpected machine state, will restart: <nil>
I0930 03:48:28.947839    3666 out.go:177] * Restarting existing qemu2 VM for "ha-937000-m02" ...
I0930 03:48:28.951861    3666 qemu.go:418] Using hvf for hardware acceleration
I0930 03:48:28.951899    3666 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/ha-937000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19734-1406/.minikube/machines/ha-937000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/ha-937000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=82:48:a5:02:6a:e7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/ha-937000-m02/disk.qcow2
I0930 03:48:28.954186    3666 main.go:141] libmachine: STDOUT: 
I0930 03:48:28.954210    3666 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

                                                
                                                
I0930 03:48:28.954237    3666 fix.go:56] duration metric: took 9.033542ms for fixHost
I0930 03:48:28.954240    3666 start.go:83] releasing machines lock for "ha-937000-m02", held for 9.0695ms
W0930 03:48:28.954288    3666 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p ha-937000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* Failed to start qemu2 VM. Running "minikube delete -p ha-937000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0930 03:48:28.957925    3666 out.go:201] 
W0930 03:48:28.961913    3666 out.go:270] X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0930 03:48:28.961918    3666 out.go:270] * 
* 
W0930 03:48:28.963678    3666 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0930 03:48:28.967943    3666 out.go:201] 

                                                
                                                
ha_test.go:423: secondary control-plane node start returned an error. args "out/minikube-darwin-arm64 -p ha-937000 node start m02 -v=7 --alsologtostderr": exit status 80
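
Note: the root cause here is on the host side, not in the VM: both restart attempts launch qemu through /opt/socket_vmnet/bin/socket_vmnet_client and both get "Connection refused" on /var/run/socket_vmnet, so the host's socket_vmnet daemon was not accepting connections. A first check on the Jenkins host, as a sketch (assuming the Homebrew service setup that minikube's qemu driver docs describe):

  ls -l /var/run/socket_vmnet
  sudo brew services info socket_vmnet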
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-937000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Done: out/minikube-darwin-arm64 -p ha-937000 status -v=7 --alsologtostderr: (25.959062708s)
ha_test.go:435: status says not all three control-plane nodes are present: args "out/minikube-darwin-arm64 -p ha-937000 status -v=7 --alsologtostderr": 
ha_test.go:438: status says not all four hosts are running: args "out/minikube-darwin-arm64 -p ha-937000 status -v=7 --alsologtostderr": 
ha_test.go:441: status says not all four kubelets are running: args "out/minikube-darwin-arm64 -p ha-937000 status -v=7 --alsologtostderr": 
ha_test.go:444: status says not all three apiservers are running: args "out/minikube-darwin-arm64 -p ha-937000 status -v=7 --alsologtostderr": 
ha_test.go:448: (dbg) Run:  kubectl get nodes
ha_test.go:448: (dbg) Non-zero exit: kubectl get nodes: exit status 1 (30.038112541s)

** stderr ** 
	Unable to connect to the server: dial tcp 192.168.105.254:8443: i/o timeout

** /stderr **
ha_test.go:450: failed to kubectl get nodes. args "kubectl get nodes" : exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-937000 -n ha-937000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-937000 -n ha-937000: exit status 3 (25.955668292s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0930 03:49:50.923840    3685 status.go:410] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0930 03:49:50.923850    3685 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out

** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "ha-937000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (87.04s)
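
Both timeouts in the failure above point at plain network reachability rather than kubelet or apiserver state: kubectl cannot reach the HA apiserver VIP (192.168.105.254:8443), and the status probe cannot reach SSH on the primary node (192.168.105.5:22). A minimal way to reproduce the same checks by hand, assuming the stock macOS netcat and the kubeconfig this run used:

    # probe the HA apiserver VIP with a 5s timeout; exit code 0 means reachable
    nc -z -w 5 192.168.105.254 8443
    # probe SSH on the primary control-plane node
    nc -z -w 5 192.168.105.5 22
    # bound kubectl's wait instead of the ~30s dial timeout seen above
    kubectl get nodes --request-timeout=5s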

TestMultiControlPlane/serial/RestartClusterKeepsNodes (234.38s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-937000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-darwin-arm64 stop -p ha-937000 -v=7 --alsologtostderr
E0930 03:50:18.266232    1929 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19734-1406/.minikube/profiles/addons-584000/client.crt: no such file or directory" logger="UnhandledError"
E0930 03:50:32.417624    1929 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19734-1406/.minikube/profiles/functional-853000/client.crt: no such file or directory" logger="UnhandledError"
E0930 03:51:00.140801    1929 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19734-1406/.minikube/profiles/functional-853000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:462: (dbg) Done: out/minikube-darwin-arm64 stop -p ha-937000 -v=7 --alsologtostderr: (3m49.012375708s)
ha_test.go:467: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-937000 --wait=true -v=7 --alsologtostderr
ha_test.go:467: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-937000 --wait=true -v=7 --alsologtostderr: exit status 80 (5.228783125s)

-- stdout --
	* [ha-937000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19734
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19734-1406/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19734-1406/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-937000" primary control-plane node in "ha-937000" cluster
	* Restarting existing qemu2 VM for "ha-937000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-937000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0930 03:54:06.255929    3738 out.go:345] Setting OutFile to fd 1 ...
	I0930 03:54:06.256138    3738 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 03:54:06.256143    3738 out.go:358] Setting ErrFile to fd 2...
	I0930 03:54:06.256146    3738 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 03:54:06.256321    3738 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19734-1406/.minikube/bin
	I0930 03:54:06.257666    3738 out.go:352] Setting JSON to false
	I0930 03:54:06.277535    3738 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3209,"bootTime":1727690437,"procs":503,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0930 03:54:06.277609    3738 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0930 03:54:06.280352    3738 out.go:177] * [ha-937000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0930 03:54:06.287517    3738 out.go:177]   - MINIKUBE_LOCATION=19734
	I0930 03:54:06.287578    3738 notify.go:220] Checking for updates...
	I0930 03:54:06.295457    3738 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19734-1406/kubeconfig
	I0930 03:54:06.299403    3738 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0930 03:54:06.303420    3738 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0930 03:54:06.306483    3738 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19734-1406/.minikube
	I0930 03:54:06.309375    3738 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0930 03:54:06.312710    3738 config.go:182] Loaded profile config "ha-937000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0930 03:54:06.312765    3738 driver.go:394] Setting default libvirt URI to qemu:///system
	I0930 03:54:06.317481    3738 out.go:177] * Using the qemu2 driver based on existing profile
	I0930 03:54:06.324466    3738 start.go:297] selected driver: qemu2
	I0930 03:54:06.324473    3738 start.go:901] validating driver "qemu2" against &{Name:ha-937000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-937000 Namespace:default APIServerHAVIP:192.168.105.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.105.7 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.105.8 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 03:54:06.324563    3738 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0930 03:54:06.327214    3738 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0930 03:54:06.327240    3738 cni.go:84] Creating CNI manager for ""
	I0930 03:54:06.327264    3738 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0930 03:54:06.327324    3738 start.go:340] cluster config:
	{Name:ha-937000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-937000 Namespace:default APIServerHAVIP:192.168.105.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.105.7 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.105.8 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 03:54:06.331353    3738 iso.go:125] acquiring lock: {Name:mk0aae8f7f133a44b195b0eefe4c7ae573d14aac Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0930 03:54:06.339450    3738 out.go:177] * Starting "ha-937000" primary control-plane node in "ha-937000" cluster
	I0930 03:54:06.343421    3738 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0930 03:54:06.343441    3738 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0930 03:54:06.343455    3738 cache.go:56] Caching tarball of preloaded images
	I0930 03:54:06.343543    3738 preload.go:172] Found /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0930 03:54:06.343549    3738 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0930 03:54:06.343641    3738 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19734-1406/.minikube/profiles/ha-937000/config.json ...
	I0930 03:54:06.344155    3738 start.go:360] acquireMachinesLock for ha-937000: {Name:mk6ecfc7d8126da9fa2597e515ffeee9de676ee7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0930 03:54:06.344194    3738 start.go:364] duration metric: took 31.958µs to acquireMachinesLock for "ha-937000"
	I0930 03:54:06.344204    3738 start.go:96] Skipping create...Using existing machine configuration
	I0930 03:54:06.344207    3738 fix.go:54] fixHost starting: 
	I0930 03:54:06.344338    3738 fix.go:112] recreateIfNeeded on ha-937000: state=Stopped err=<nil>
	W0930 03:54:06.344346    3738 fix.go:138] unexpected machine state, will restart: <nil>
	I0930 03:54:06.347431    3738 out.go:177] * Restarting existing qemu2 VM for "ha-937000" ...
	I0930 03:54:06.354340    3738 qemu.go:418] Using hvf for hardware acceleration
	I0930 03:54:06.354380    3738 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/ha-937000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19734-1406/.minikube/machines/ha-937000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/ha-937000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0e:98:42:81:47:78 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/ha-937000/disk.qcow2
	I0930 03:54:06.356518    3738 main.go:141] libmachine: STDOUT: 
	I0930 03:54:06.356547    3738 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0930 03:54:06.356584    3738 fix.go:56] duration metric: took 12.374916ms for fixHost
	I0930 03:54:06.356590    3738 start.go:83] releasing machines lock for "ha-937000", held for 12.390875ms
	W0930 03:54:06.356598    3738 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0930 03:54:06.356644    3738 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0930 03:54:06.356649    3738 start.go:729] Will try again in 5 seconds ...
	I0930 03:54:11.358742    3738 start.go:360] acquireMachinesLock for ha-937000: {Name:mk6ecfc7d8126da9fa2597e515ffeee9de676ee7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0930 03:54:11.359141    3738 start.go:364] duration metric: took 305.584µs to acquireMachinesLock for "ha-937000"
	I0930 03:54:11.359272    3738 start.go:96] Skipping create...Using existing machine configuration
	I0930 03:54:11.359289    3738 fix.go:54] fixHost starting: 
	I0930 03:54:11.359924    3738 fix.go:112] recreateIfNeeded on ha-937000: state=Stopped err=<nil>
	W0930 03:54:11.359951    3738 fix.go:138] unexpected machine state, will restart: <nil>
	I0930 03:54:11.363475    3738 out.go:177] * Restarting existing qemu2 VM for "ha-937000" ...
	I0930 03:54:11.371272    3738 qemu.go:418] Using hvf for hardware acceleration
	I0930 03:54:11.371471    3738 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/ha-937000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19734-1406/.minikube/machines/ha-937000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/ha-937000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0e:98:42:81:47:78 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/ha-937000/disk.qcow2
	I0930 03:54:11.380630    3738 main.go:141] libmachine: STDOUT: 
	I0930 03:54:11.380687    3738 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0930 03:54:11.380740    3738 fix.go:56] duration metric: took 21.452625ms for fixHost
	I0930 03:54:11.380760    3738 start.go:83] releasing machines lock for "ha-937000", held for 21.59725ms
	W0930 03:54:11.380929    3738 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p ha-937000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-937000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0930 03:54:11.389301    3738 out.go:201] 
	W0930 03:54:11.393438    3738 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0930 03:54:11.393492    3738 out.go:270] * 
	* 
	W0930 03:54:11.395922    3738 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0930 03:54:11.404309    3738 out.go:201] 

** /stderr **
ha_test.go:469: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p ha-937000 -v=7 --alsologtostderr" : exit status 80
ha_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-937000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-937000 -n ha-937000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-937000 -n ha-937000: exit status 7 (32.748792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-937000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (234.38s)
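
Every restart attempt in this run dies at the same point: the qemu2 driver launches QEMU through /opt/socket_vmnet/bin/socket_vmnet_client, which gets "Connection refused" on the daemon's unix socket at /var/run/socket_vmnet. A quick sanity check before retrying, sketched on the assumption that socket_vmnet was installed as a background daemon per the qemu2 driver setup:

    # the unix socket should exist
    ls -l /var/run/socket_vmnet
    # the daemon should be running; no output means it is down
    pgrep -fl socket_vmnet

If the daemon is down, restarting it through whatever mechanism installed it (launchd plist or Homebrew service) should clear the "Connection refused" before `minikube start` is retried.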

TestMultiControlPlane/serial/DeleteSecondaryNode (0.1s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-darwin-arm64 -p ha-937000 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-937000 node delete m03 -v=7 --alsologtostderr: exit status 83 (41.563875ms)

-- stdout --
	* The control-plane node ha-937000-m03 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-937000"

-- /stdout --
** stderr ** 
	I0930 03:54:11.551390    3750 out.go:345] Setting OutFile to fd 1 ...
	I0930 03:54:11.551849    3750 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 03:54:11.551854    3750 out.go:358] Setting ErrFile to fd 2...
	I0930 03:54:11.551857    3750 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 03:54:11.552040    3750 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19734-1406/.minikube/bin
	I0930 03:54:11.552379    3750 mustload.go:65] Loading cluster: ha-937000
	I0930 03:54:11.552730    3750 config.go:182] Loaded profile config "ha-937000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	W0930 03:54:11.553059    3750 out.go:270] ! The control-plane node ha-937000 host is not running (will try others): state=Stopped
	! The control-plane node ha-937000 host is not running (will try others): state=Stopped
	W0930 03:54:11.553172    3750 out.go:270] ! The control-plane node ha-937000-m02 host is not running (will try others): state=Stopped
	! The control-plane node ha-937000-m02 host is not running (will try others): state=Stopped
	I0930 03:54:11.558013    3750 out.go:177] * The control-plane node ha-937000-m03 host is not running: state=Stopped
	I0930 03:54:11.561049    3750 out.go:177]   To start a cluster, run: "minikube start -p ha-937000"

** /stderr **
ha_test.go:489: node delete returned an error. args "out/minikube-darwin-arm64 -p ha-937000 node delete m03 -v=7 --alsologtostderr": exit status 83
ha_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 -p ha-937000 status -v=7 --alsologtostderr
ha_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-937000 status -v=7 --alsologtostderr: exit status 7 (31.734834ms)

-- stdout --
	ha-937000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-937000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-937000-m03
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-937000-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0930 03:54:11.593833    3752 out.go:345] Setting OutFile to fd 1 ...
	I0930 03:54:11.594164    3752 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 03:54:11.594169    3752 out.go:358] Setting ErrFile to fd 2...
	I0930 03:54:11.594172    3752 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 03:54:11.594374    3752 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19734-1406/.minikube/bin
	I0930 03:54:11.594528    3752 out.go:352] Setting JSON to false
	I0930 03:54:11.594540    3752 mustload.go:65] Loading cluster: ha-937000
	I0930 03:54:11.594659    3752 notify.go:220] Checking for updates...
	I0930 03:54:11.595017    3752 config.go:182] Loaded profile config "ha-937000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0930 03:54:11.595029    3752 status.go:174] checking status of ha-937000 ...
	I0930 03:54:11.595267    3752 status.go:364] ha-937000 host status = "Stopped" (err=<nil>)
	I0930 03:54:11.595271    3752 status.go:377] host is not running, skipping remaining checks
	I0930 03:54:11.595274    3752 status.go:176] ha-937000 status: &{Name:ha-937000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0930 03:54:11.595284    3752 status.go:174] checking status of ha-937000-m02 ...
	I0930 03:54:11.595377    3752 status.go:364] ha-937000-m02 host status = "Stopped" (err=<nil>)
	I0930 03:54:11.595379    3752 status.go:377] host is not running, skipping remaining checks
	I0930 03:54:11.595381    3752 status.go:176] ha-937000-m02 status: &{Name:ha-937000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0930 03:54:11.595385    3752 status.go:174] checking status of ha-937000-m03 ...
	I0930 03:54:11.595472    3752 status.go:364] ha-937000-m03 host status = "Stopped" (err=<nil>)
	I0930 03:54:11.595474    3752 status.go:377] host is not running, skipping remaining checks
	I0930 03:54:11.595476    3752 status.go:176] ha-937000-m03 status: &{Name:ha-937000-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0930 03:54:11.595479    3752 status.go:174] checking status of ha-937000-m04 ...
	I0930 03:54:11.595576    3752 status.go:364] ha-937000-m04 host status = "Stopped" (err=<nil>)
	I0930 03:54:11.595579    3752 status.go:377] host is not running, skipping remaining checks
	I0930 03:54:11.595581    3752 status.go:176] ha-937000-m04 status: &{Name:ha-937000-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:495: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-937000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-937000 -n ha-937000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-937000 -n ha-937000: exit status 7 (30.9085ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-937000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DeleteSecondaryNode (0.10s)
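
The node delete bails out because, as the stderr above shows, minikube walks the control-plane nodes looking for a running host to drive the deletion ("will try others") and every host in the profile is Stopped. The per-node probe can be reproduced with the same status flags the harness uses, e.g. for the node the test tried to delete (node name assumed from the status output above):

    # query one node's host state; prints "Stopped" for this run
    out/minikube-darwin-arm64 status -p ha-937000 -n ha-937000-m03 --format={{.Host}}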

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.08s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-937000" in json of 'profile list' to have "Degraded" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-937000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-937000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-937000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.105.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.105.5\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.105.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.105.7\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.105.8\",\"Port\":0,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-937000 -n ha-937000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-937000 -n ha-937000: exit status 7 (31.037708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-937000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.08s)
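
The assertion above is a string comparison against the "Status" field of the profile JSON: the test expects "Degraded" but the profile still reports "Starting". When reading these logs, the field is easier to pull out of the blob with jq, assuming it is installed:

    out/minikube-darwin-arm64 profile list --output json \
      | jq -r '.valid[] | select(.Name == "ha-937000") | .Status'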

TestMultiControlPlane/serial/StopCluster (202.07s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-darwin-arm64 -p ha-937000 stop -v=7 --alsologtostderr
E0930 03:55:18.261778    1929 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19734-1406/.minikube/profiles/addons-584000/client.crt: no such file or directory" logger="UnhandledError"
E0930 03:55:32.413732    1929 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19734-1406/.minikube/profiles/functional-853000/client.crt: no such file or directory" logger="UnhandledError"
E0930 03:56:41.353839    1929 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19734-1406/.minikube/profiles/addons-584000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:531: (dbg) Done: out/minikube-darwin-arm64 -p ha-937000 stop -v=7 --alsologtostderr: (3m21.97640525s)
ha_test.go:537: (dbg) Run:  out/minikube-darwin-arm64 -p ha-937000 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-937000 status -v=7 --alsologtostderr: exit status 7 (64.31825ms)

-- stdout --
	ha-937000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-937000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-937000-m03
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-937000-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0930 03:57:33.741794    3798 out.go:345] Setting OutFile to fd 1 ...
	I0930 03:57:33.741973    3798 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 03:57:33.741978    3798 out.go:358] Setting ErrFile to fd 2...
	I0930 03:57:33.741981    3798 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 03:57:33.742146    3798 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19734-1406/.minikube/bin
	I0930 03:57:33.742294    3798 out.go:352] Setting JSON to false
	I0930 03:57:33.742308    3798 mustload.go:65] Loading cluster: ha-937000
	I0930 03:57:33.742346    3798 notify.go:220] Checking for updates...
	I0930 03:57:33.742607    3798 config.go:182] Loaded profile config "ha-937000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0930 03:57:33.742621    3798 status.go:174] checking status of ha-937000 ...
	I0930 03:57:33.742916    3798 status.go:364] ha-937000 host status = "Stopped" (err=<nil>)
	I0930 03:57:33.742920    3798 status.go:377] host is not running, skipping remaining checks
	I0930 03:57:33.742923    3798 status.go:176] ha-937000 status: &{Name:ha-937000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0930 03:57:33.742935    3798 status.go:174] checking status of ha-937000-m02 ...
	I0930 03:57:33.743058    3798 status.go:364] ha-937000-m02 host status = "Stopped" (err=<nil>)
	I0930 03:57:33.743063    3798 status.go:377] host is not running, skipping remaining checks
	I0930 03:57:33.743065    3798 status.go:176] ha-937000-m02 status: &{Name:ha-937000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0930 03:57:33.743069    3798 status.go:174] checking status of ha-937000-m03 ...
	I0930 03:57:33.743206    3798 status.go:364] ha-937000-m03 host status = "Stopped" (err=<nil>)
	I0930 03:57:33.743209    3798 status.go:377] host is not running, skipping remaining checks
	I0930 03:57:33.743211    3798 status.go:176] ha-937000-m03 status: &{Name:ha-937000-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0930 03:57:33.743216    3798 status.go:174] checking status of ha-937000-m04 ...
	I0930 03:57:33.743329    3798 status.go:364] ha-937000-m04 host status = "Stopped" (err=<nil>)
	I0930 03:57:33.743332    3798 status.go:377] host is not running, skipping remaining checks
	I0930 03:57:33.743334    3798 status.go:176] ha-937000-m04 status: &{Name:ha-937000-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:543: status says not two control-plane nodes are present: args "out/minikube-darwin-arm64 -p ha-937000 status -v=7 --alsologtostderr": ha-937000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-937000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-937000-m03
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-937000-m04
type: Worker
host: Stopped
kubelet: Stopped

ha_test.go:549: status says not three kubelets are stopped: args "out/minikube-darwin-arm64 -p ha-937000 status -v=7 --alsologtostderr": ha-937000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-937000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-937000-m03
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-937000-m04
type: Worker
host: Stopped
kubelet: Stopped

ha_test.go:552: status says not two apiservers are stopped: args "out/minikube-darwin-arm64 -p ha-937000 status -v=7 --alsologtostderr": ha-937000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-937000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-937000-m03
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-937000-m04
type: Worker
host: Stopped
kubelet: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-937000 -n ha-937000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-937000 -n ha-937000: exit status 7 (33.071584ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-937000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopCluster (202.07s)
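
The three assertions above tally lines in the status text; they expect the counts for a cluster with m03 already deleted (two control planes, three stopped kubelets, two stopped apiservers), but since the earlier node delete failed, this run still reports three stopped control planes and four stopped kubelets. The same tallies can be reproduced from a shell against the output format shown above:

    # counts for this run: 3 control planes, 4 stopped kubelets, 3 stopped apiservers
    out/minikube-darwin-arm64 -p ha-937000 status | grep -c 'type: Control Plane'
    out/minikube-darwin-arm64 -p ha-937000 status | grep -c 'kubelet: Stopped'
    out/minikube-darwin-arm64 -p ha-937000 status | grep -c 'apiserver: Stopped'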

TestMultiControlPlane/serial/RestartCluster (5.25s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-937000 --wait=true -v=7 --alsologtostderr --driver=qemu2 
ha_test.go:560: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-937000 --wait=true -v=7 --alsologtostderr --driver=qemu2 : exit status 80 (5.184497084s)

-- stdout --
	* [ha-937000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19734
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19734-1406/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19734-1406/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-937000" primary control-plane node in "ha-937000" cluster
	* Restarting existing qemu2 VM for "ha-937000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-937000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0930 03:57:33.806127    3802 out.go:345] Setting OutFile to fd 1 ...
	I0930 03:57:33.806254    3802 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 03:57:33.806257    3802 out.go:358] Setting ErrFile to fd 2...
	I0930 03:57:33.806260    3802 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 03:57:33.806388    3802 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19734-1406/.minikube/bin
	I0930 03:57:33.807389    3802 out.go:352] Setting JSON to false
	I0930 03:57:33.823664    3802 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3416,"bootTime":1727690437,"procs":506,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0930 03:57:33.823723    3802 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0930 03:57:33.829234    3802 out.go:177] * [ha-937000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0930 03:57:33.837257    3802 out.go:177]   - MINIKUBE_LOCATION=19734
	I0930 03:57:33.837304    3802 notify.go:220] Checking for updates...
	I0930 03:57:33.844198    3802 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19734-1406/kubeconfig
	I0930 03:57:33.848254    3802 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0930 03:57:33.851292    3802 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0930 03:57:33.854112    3802 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19734-1406/.minikube
	I0930 03:57:33.857286    3802 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0930 03:57:33.860629    3802 config.go:182] Loaded profile config "ha-937000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0930 03:57:33.860889    3802 driver.go:394] Setting default libvirt URI to qemu:///system
	I0930 03:57:33.864236    3802 out.go:177] * Using the qemu2 driver based on existing profile
	I0930 03:57:33.871243    3802 start.go:297] selected driver: qemu2
	I0930 03:57:33.871249    3802 start.go:901] validating driver "qemu2" against &{Name:ha-937000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-937000 Namespace:default APIServerHAVIP:192.168.105.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.105.7 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.105.8 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 03:57:33.871329    3802 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0930 03:57:33.873491    3802 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0930 03:57:33.873586    3802 cni.go:84] Creating CNI manager for ""
	I0930 03:57:33.873613    3802 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0930 03:57:33.873667    3802 start.go:340] cluster config:
	{Name:ha-937000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-937000 Namespace:default APIServerHAVIP:192.168.105.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.105.7 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.105.8 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 03:57:33.877288    3802 iso.go:125] acquiring lock: {Name:mk0aae8f7f133a44b195b0eefe4c7ae573d14aac Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0930 03:57:33.885237    3802 out.go:177] * Starting "ha-937000" primary control-plane node in "ha-937000" cluster
	I0930 03:57:33.889230    3802 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0930 03:57:33.889247    3802 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0930 03:57:33.889257    3802 cache.go:56] Caching tarball of preloaded images
	I0930 03:57:33.889322    3802 preload.go:172] Found /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0930 03:57:33.889329    3802 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0930 03:57:33.889411    3802 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19734-1406/.minikube/profiles/ha-937000/config.json ...
	I0930 03:57:33.889924    3802 start.go:360] acquireMachinesLock for ha-937000: {Name:mk6ecfc7d8126da9fa2597e515ffeee9de676ee7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0930 03:57:33.889961    3802 start.go:364] duration metric: took 30.75µs to acquireMachinesLock for "ha-937000"
	I0930 03:57:33.889969    3802 start.go:96] Skipping create...Using existing machine configuration
	I0930 03:57:33.889975    3802 fix.go:54] fixHost starting: 
	I0930 03:57:33.890104    3802 fix.go:112] recreateIfNeeded on ha-937000: state=Stopped err=<nil>
	W0930 03:57:33.890114    3802 fix.go:138] unexpected machine state, will restart: <nil>
	I0930 03:57:33.895259    3802 out.go:177] * Restarting existing qemu2 VM for "ha-937000" ...
	I0930 03:57:33.903197    3802 qemu.go:418] Using hvf for hardware acceleration
	I0930 03:57:33.903237    3802 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/ha-937000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19734-1406/.minikube/machines/ha-937000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/ha-937000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0e:98:42:81:47:78 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/ha-937000/disk.qcow2
	I0930 03:57:33.905298    3802 main.go:141] libmachine: STDOUT: 
	I0930 03:57:33.905320    3802 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0930 03:57:33.905356    3802 fix.go:56] duration metric: took 15.380833ms for fixHost
	I0930 03:57:33.905360    3802 start.go:83] releasing machines lock for "ha-937000", held for 15.395125ms
	W0930 03:57:33.905368    3802 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0930 03:57:33.905406    3802 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0930 03:57:33.905411    3802 start.go:729] Will try again in 5 seconds ...
	I0930 03:57:38.907510    3802 start.go:360] acquireMachinesLock for ha-937000: {Name:mk6ecfc7d8126da9fa2597e515ffeee9de676ee7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0930 03:57:38.907929    3802 start.go:364] duration metric: took 328.625µs to acquireMachinesLock for "ha-937000"
	I0930 03:57:38.908079    3802 start.go:96] Skipping create...Using existing machine configuration
	I0930 03:57:38.908099    3802 fix.go:54] fixHost starting: 
	I0930 03:57:38.908753    3802 fix.go:112] recreateIfNeeded on ha-937000: state=Stopped err=<nil>
	W0930 03:57:38.908779    3802 fix.go:138] unexpected machine state, will restart: <nil>
	I0930 03:57:38.913294    3802 out.go:177] * Restarting existing qemu2 VM for "ha-937000" ...
	I0930 03:57:38.921115    3802 qemu.go:418] Using hvf for hardware acceleration
	I0930 03:57:38.921324    3802 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/ha-937000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19734-1406/.minikube/machines/ha-937000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/ha-937000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0e:98:42:81:47:78 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/ha-937000/disk.qcow2
	I0930 03:57:38.930470    3802 main.go:141] libmachine: STDOUT: 
	I0930 03:57:38.931050    3802 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0930 03:57:38.931116    3802 fix.go:56] duration metric: took 23.015458ms for fixHost
	I0930 03:57:38.931132    3802 start.go:83] releasing machines lock for "ha-937000", held for 23.174458ms
	W0930 03:57:38.931303    3802 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p ha-937000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-937000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0930 03:57:38.937112    3802 out.go:201] 
	W0930 03:57:38.940241    3802 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0930 03:57:38.940266    3802 out.go:270] * 
	* 
	W0930 03:57:38.942831    3802 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0930 03:57:38.954081    3802 out.go:201] 
** /stderr **
ha_test.go:562: failed to start cluster. args "out/minikube-darwin-arm64 start -p ha-937000 --wait=true -v=7 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-937000 -n ha-937000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-937000 -n ha-937000: exit status 7 (67.39375ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-937000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartCluster (5.25s)
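
Every qemu2 failure above stalls at the same driver step: socket_vmnet_client cannot reach /var/run/socket_vmnet, so QEMU never receives its network file descriptor. A minimal Go sketch (hypothetical, not part of the test suite) that probes whether the socket_vmnet daemon is accepting connections on the SocketVMnetPath from the profile config logged above:

package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	// SocketVMnetPath from the cluster config in the log above.
	const sock = "/var/run/socket_vmnet"
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		// A "connection refused" here matches the driver error in the log:
		// nothing is listening on the socket, so the daemon is down.
		fmt.Fprintf(os.Stderr, "socket_vmnet not reachable: %v\n", err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}

On this CI host the probe would presumably fail, which is consistent with the identical "Connection refused" repeated across the qemu2 tests in this report.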
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.08s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-937000" in json of 'profile list' to have "Degraded" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-937000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-937000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount
\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-937000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.105.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.105.5\",\"Port\":8443,\
"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.105.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.105.7\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.105.8\",\"Port\":0,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logv
iewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\
":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-937000 -n ha-937000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-937000 -n ha-937000: exit status 7 (31.000584ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-937000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.08s)
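
The check at ha_test.go:413 parses `minikube profile list --output json` and asserts on the Status field. A trimmed illustration of that decode (the struct below is a hypothetical minimal view, with field names taken from the JSON embedded in the failure message above, not the test's actual types):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// Minimal view of the `profile list --output json` payload shown above.
type profileList struct {
	Valid []struct {
		Name   string `json:"Name"`
		Status string `json:"Status"`
	} `json:"valid"`
}

func main() {
	out, err := exec.Command("out/minikube-darwin-arm64", "profile", "list", "--output", "json").Output()
	if err != nil {
		panic(err)
	}
	var pl profileList
	if err := json.Unmarshal(out, &pl); err != nil {
		panic(err)
	}
	for _, p := range pl.Valid {
		// The test expects "Degraded" after the failed restart; the log shows
		// the profile stuck in "Starting" because the VM never came up.
		fmt.Printf("%s: %s\n", p.Name, p.Status)
	}
}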
TestMultiControlPlane/serial/AddSecondaryNode (0.07s)
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-937000 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p ha-937000 --control-plane -v=7 --alsologtostderr: exit status 83 (39.626ms)
-- stdout --
	* The control-plane node ha-937000-m03 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-937000"
-- /stdout --
** stderr ** 
	I0930 03:57:39.141429    3817 out.go:345] Setting OutFile to fd 1 ...
	I0930 03:57:39.141586    3817 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 03:57:39.141589    3817 out.go:358] Setting ErrFile to fd 2...
	I0930 03:57:39.141591    3817 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 03:57:39.141729    3817 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19734-1406/.minikube/bin
	I0930 03:57:39.141961    3817 mustload.go:65] Loading cluster: ha-937000
	I0930 03:57:39.142223    3817 config.go:182] Loaded profile config "ha-937000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	W0930 03:57:39.142507    3817 out.go:270] ! The control-plane node ha-937000 host is not running (will try others): state=Stopped
	! The control-plane node ha-937000 host is not running (will try others): state=Stopped
	W0930 03:57:39.142608    3817 out.go:270] ! The control-plane node ha-937000-m02 host is not running (will try others): state=Stopped
	! The control-plane node ha-937000-m02 host is not running (will try others): state=Stopped
	I0930 03:57:39.144236    3817 out.go:177] * The control-plane node ha-937000-m03 host is not running: state=Stopped
	I0930 03:57:39.148251    3817 out.go:177]   To start a cluster, run: "minikube start -p ha-937000"
** /stderr **
ha_test.go:607: failed to add control-plane node to current ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 node add -p ha-937000 --control-plane -v=7 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-937000 -n ha-937000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-937000 -n ha-937000: exit status 7 (30.924208ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-937000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/AddSecondaryNode (0.07s)
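
The mustload lines above show how `node add` picks a host: it walks the control-plane nodes in order (ha-937000, then -m02, then -m03) and exits 83 once all of them are stopped. A schematic of that fallback (illustrative only; node names and states copied from the log, not minikube's actual code):

package main

import (
	"fmt"
	"os"
)

type node struct {
	name    string
	running bool
}

func main() {
	// Host states taken from the log: all three control planes are Stopped.
	cps := []node{
		{"ha-937000", false},
		{"ha-937000-m02", false},
		{"ha-937000-m03", false},
	}
	for i, n := range cps {
		if n.running {
			fmt.Printf("using control-plane node %s\n", n.name)
			return
		}
		if i < len(cps)-1 {
			fmt.Printf("! The control-plane node %s host is not running (will try others): state=Stopped\n", n.name)
			continue
		}
		fmt.Printf("* The control-plane node %s host is not running: state=Stopped\n", n.name)
		os.Exit(83) // matches the Non-zero exit status reported above
	}
}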
TestImageBuild/serial/Setup (10.01s)
=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -p image-068000 --driver=qemu2 
image_test.go:69: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p image-068000 --driver=qemu2 : exit status 80 (9.934928417s)
-- stdout --
	* [image-068000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19734
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19734-1406/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19734-1406/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "image-068000" primary control-plane node in "image-068000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "image-068000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p image-068000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
image_test.go:70: failed to start minikube with args: "out/minikube-darwin-arm64 start -p image-068000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p image-068000 -n image-068000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p image-068000 -n image-068000: exit status 7 (69.55475ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "image-068000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestImageBuild/serial/Setup (10.01s)
TestJSONOutput/start/Command (9.84s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-879000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-879000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 : exit status 80 (9.842308792s)
-- stdout --
	{"specversion":"1.0","id":"77abbd6a-c023-4fb3-a8a5-8bf249dd7165","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-879000] minikube v1.34.0 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"52087f79-8a5c-4a44-85b6-f0ca1502377a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19734"}}
	{"specversion":"1.0","id":"f62c0798-8b84-4600-b8b7-d971567e5e95","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19734-1406/kubeconfig"}}
	{"specversion":"1.0","id":"13f0fbc3-feb3-44ca-9ccd-8bb5d0b35470","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"c265c82d-d0cf-45c6-aea9-42413a538f4b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"4a20e42b-072a-4cff-a49a-3ee69240ac4d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19734-1406/.minikube"}}
	{"specversion":"1.0","id":"72f6f140-de5c-40b7-85eb-b133425c932d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"73b9c0ab-7bf8-4f18-b627-e0ac54d03fb3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"da51bb12-febf-43ee-a91c-bf6c2dbe2f44","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"db5ae216-ded3-408b-b4f5-917a04bafc28","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"json-output-879000\" primary control-plane node in \"json-output-879000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"94405e6b-9300-4596-87be-097fe6cd1b21","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"48085539-075d-4805-b0df-40b092b785a1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Deleting \"json-output-879000\" in qemu2 ...","name":"Creating VM","totalsteps":"19"}}
	{"specversion":"1.0","id":"854b94a3-1700-4e6d-a662-9abb6ae12aa9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"StartHost failed, but will try again: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"ba1ca787-dec6-41e8-97ea-114d384d4bc5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"49c8cd9b-67ad-4ccb-af24-553332faeea3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"Failed to start qemu2 VM. Running \"minikube delete -p json-output-879000\" may fix it: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"8846a145-c36a-45af-b441-5ddb4d9ac301","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1","name":"GUEST_PROVISION","url":""}}
	{"specversion":"1.0","id":"b93f58ac-a1ed-41f7-a880-3b3459e0a805","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 start -p json-output-879000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 ": exit status 80
json_output_test.go:213: unable to marshal output: OUTPUT: 
json_output_test.go:70: converting to cloud events: invalid character 'O' looking for beginning of value
--- FAIL: TestJSONOutput/start/Command (9.84s)
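
Note that TestJSONOutput/start fails twice over: the start itself exits 80, and the JSON stream is unparseable because raw "OUTPUT:"/"ERROR:" driver lines are interleaved with the CloudEvents, so the first decode hits `invalid character 'O'`. A rough stand-in for the per-line validation the test performs (hand-rolled sketch, not the code in json_output_test.go):

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
	"strings"
)

func main() {
	// Pipe the stdout of `minikube start --output=json` into this program.
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 1024*1024), 1024*1024) // CloudEvents lines can be long
	for n := 1; sc.Scan(); n++ {
		line := strings.TrimSpace(sc.Text())
		if line == "" {
			continue
		}
		var ev map[string]any
		if err := json.Unmarshal([]byte(line), &ev); err != nil {
			// This is where "OUTPUT:" trips the test:
			// invalid character 'O' looking for beginning of value
			fmt.Printf("line %d is not a CloudEvent: %v\n", n, err)
			continue
		}
		fmt.Printf("line %d: type=%v\n", n, ev["type"])
	}
}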
TestJSONOutput/pause/Command (0.08s)
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 pause -p json-output-879000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p json-output-879000 --output=json --user=testUser: exit status 83 (78.431542ms)
-- stdout --
	{"specversion":"1.0","id":"f8ef7e34-e656-4711-b796-369ef23c482b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"The control-plane node json-output-879000 host is not running: state=Stopped"}}
	{"specversion":"1.0","id":"8e83280e-7c62-4e93-9b9a-b3db9b427cdb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"To start a cluster, run: \"minikube start -p json-output-879000\""}}
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 pause -p json-output-879000 --output=json --user=testUser": exit status 83
--- FAIL: TestJSONOutput/pause/Command (0.08s)
TestJSONOutput/unpause/Command (0.05s)
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 unpause -p json-output-879000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 unpause -p json-output-879000 --output=json --user=testUser: exit status 83 (47.083334ms)
-- stdout --
	* The control-plane node json-output-879000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p json-output-879000"
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 unpause -p json-output-879000 --output=json --user=testUser": exit status 83
json_output_test.go:213: unable to marshal output: * The control-plane node json-output-879000 host is not running: state=Stopped
json_output_test.go:70: converting to cloud events: invalid character '*' looking for beginning of value
--- FAIL: TestJSONOutput/unpause/Command (0.05s)
TestMinikubeProfile (10.21s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p first-072000 --driver=qemu2 
minikube_profile_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p first-072000 --driver=qemu2 : exit status 80 (9.904026833s)
-- stdout --
	* [first-072000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19734
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19734-1406/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19734-1406/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "first-072000" primary control-plane node in "first-072000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "first-072000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p first-072000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
minikube_profile_test.go:46: test pre-condition failed. args "out/minikube-darwin-arm64 start -p first-072000 --driver=qemu2 ": exit status 80
panic.go:629: *** TestMinikubeProfile FAILED at 2024-09-30 03:58:11.92856 -0700 PDT m=+2287.638116751
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p second-073000 -n second-073000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p second-073000 -n second-073000: exit status 85 (83.755042ms)
-- stdout --
	* Profile "second-073000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p second-073000"
-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "second-073000" host is not running, skipping log retrieval (state="* Profile \"second-073000\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p second-073000\"")
helpers_test.go:175: Cleaning up "second-073000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p second-073000
panic.go:629: *** TestMinikubeProfile FAILED at 2024-09-30 03:58:12.121351 -0700 PDT m=+2287.830910251
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p first-072000 -n first-072000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p first-072000 -n first-072000: exit status 7 (30.717416ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "first-072000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "first-072000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p first-072000
--- FAIL: TestMinikubeProfile (10.21s)
TestMountStart/serial/StartWithMountFirst (10.53s)
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-arm64 start -p mount-start-1-358000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 
mount_start_test.go:98: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p mount-start-1-358000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 : exit status 80 (10.461644667s)
-- stdout --
	* [mount-start-1-358000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19734
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19734-1406/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19734-1406/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting minikube without Kubernetes in cluster mount-start-1-358000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "mount-start-1-358000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p mount-start-1-358000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
mount_start_test.go:100: failed to start minikube with args: "out/minikube-darwin-arm64 start -p mount-start-1-358000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-358000 -n mount-start-1-358000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-358000 -n mount-start-1-358000: exit status 7 (72.198708ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "mount-start-1-358000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMountStart/serial/StartWithMountFirst (10.53s)
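
The stdout above also traces minikube's create-retry path end to end: the first create fails, the half-created profile is deleted, a single retry runs after five seconds (see the start.go:714/start.go:729 lines earlier in this report), and the second failure becomes exit code 80 via GUEST_PROVISION. A schematic of that control flow (illustrative only, not minikube's actual implementation):

package main

import (
	"errors"
	"fmt"
	"os"
	"time"
)

// createHost stands in for the libmachine create that fails in these logs.
func createHost() error {
	return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
}

func main() {
	if err := createHost(); err != nil {
		fmt.Println("! StartHost failed, but will try again:", err)
		fmt.Println(`* Deleting the half-created VM ...`)
		time.Sleep(5 * time.Second) // "Will try again in 5 seconds ..."
		if err = createHost(); err != nil {
			fmt.Println("X Exiting due to GUEST_PROVISION:", err)
			os.Exit(80) // the exit status reported by every failed start here
		}
	}
}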
TestMultiNode/serial/FreshStart2Nodes (9.83s)
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-711000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:96: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-711000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (9.76293075s)
-- stdout --
	* [multinode-711000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19734
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19734-1406/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19734-1406/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-711000" primary control-plane node in "multinode-711000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-711000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	I0930 03:58:22.985611    3951 out.go:345] Setting OutFile to fd 1 ...
	I0930 03:58:22.985745    3951 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 03:58:22.985749    3951 out.go:358] Setting ErrFile to fd 2...
	I0930 03:58:22.985751    3951 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 03:58:22.985876    3951 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19734-1406/.minikube/bin
	I0930 03:58:22.987017    3951 out.go:352] Setting JSON to false
	I0930 03:58:23.003255    3951 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3465,"bootTime":1727690437,"procs":504,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0930 03:58:23.003327    3951 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0930 03:58:23.008875    3951 out.go:177] * [multinode-711000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0930 03:58:23.016880    3951 out.go:177]   - MINIKUBE_LOCATION=19734
	I0930 03:58:23.016935    3951 notify.go:220] Checking for updates...
	I0930 03:58:23.023831    3951 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19734-1406/kubeconfig
	I0930 03:58:23.026841    3951 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0930 03:58:23.029843    3951 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0930 03:58:23.032814    3951 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19734-1406/.minikube
	I0930 03:58:23.035851    3951 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0930 03:58:23.038949    3951 driver.go:394] Setting default libvirt URI to qemu:///system
	I0930 03:58:23.042818    3951 out.go:177] * Using the qemu2 driver based on user configuration
	I0930 03:58:23.049832    3951 start.go:297] selected driver: qemu2
	I0930 03:58:23.049837    3951 start.go:901] validating driver "qemu2" against <nil>
	I0930 03:58:23.049843    3951 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0930 03:58:23.052098    3951 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0930 03:58:23.055810    3951 out.go:177] * Automatically selected the socket_vmnet network
	I0930 03:58:23.058869    3951 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0930 03:58:23.058885    3951 cni.go:84] Creating CNI manager for ""
	I0930 03:58:23.058902    3951 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0930 03:58:23.058906    3951 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0930 03:58:23.058938    3951 start.go:340] cluster config:
	{Name:multinode-711000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-711000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRu
ntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vm
net_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 03:58:23.062675    3951 iso.go:125] acquiring lock: {Name:mk0aae8f7f133a44b195b0eefe4c7ae573d14aac Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0930 03:58:23.070850    3951 out.go:177] * Starting "multinode-711000" primary control-plane node in "multinode-711000" cluster
	I0930 03:58:23.074628    3951 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0930 03:58:23.074642    3951 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0930 03:58:23.074650    3951 cache.go:56] Caching tarball of preloaded images
	I0930 03:58:23.074716    3951 preload.go:172] Found /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0930 03:58:23.074723    3951 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0930 03:58:23.074972    3951 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19734-1406/.minikube/profiles/multinode-711000/config.json ...
	I0930 03:58:23.074983    3951 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19734-1406/.minikube/profiles/multinode-711000/config.json: {Name:mk406026a3e1193ddbcbd135e990d7aa42b756fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 03:58:23.075222    3951 start.go:360] acquireMachinesLock for multinode-711000: {Name:mk6ecfc7d8126da9fa2597e515ffeee9de676ee7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0930 03:58:23.075259    3951 start.go:364] duration metric: took 30.792µs to acquireMachinesLock for "multinode-711000"
	I0930 03:58:23.075272    3951 start.go:93] Provisioning new machine with config: &{Name:multinode-711000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.31.1 ClusterName:multinode-711000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[
] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0930 03:58:23.075308    3951 start.go:125] createHost starting for "" (driver="qemu2")
	I0930 03:58:23.082703    3951 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0930 03:58:23.101433    3951 start.go:159] libmachine.API.Create for "multinode-711000" (driver="qemu2")
	I0930 03:58:23.101468    3951 client.go:168] LocalClient.Create starting
	I0930 03:58:23.101548    3951 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19734-1406/.minikube/certs/ca.pem
	I0930 03:58:23.101580    3951 main.go:141] libmachine: Decoding PEM data...
	I0930 03:58:23.101590    3951 main.go:141] libmachine: Parsing certificate...
	I0930 03:58:23.101635    3951 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19734-1406/.minikube/certs/cert.pem
	I0930 03:58:23.101664    3951 main.go:141] libmachine: Decoding PEM data...
	I0930 03:58:23.101673    3951 main.go:141] libmachine: Parsing certificate...
	I0930 03:58:23.102068    3951 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19734-1406/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0930 03:58:23.264386    3951 main.go:141] libmachine: Creating SSH key...
	I0930 03:58:23.292812    3951 main.go:141] libmachine: Creating Disk image...
	I0930 03:58:23.292818    3951 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0930 03:58:23.292981    3951 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/multinode-711000/disk.qcow2.raw /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/multinode-711000/disk.qcow2
	I0930 03:58:23.302042    3951 main.go:141] libmachine: STDOUT: 
	I0930 03:58:23.302065    3951 main.go:141] libmachine: STDERR: 
	I0930 03:58:23.302121    3951 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/multinode-711000/disk.qcow2 +20000M
	I0930 03:58:23.310393    3951 main.go:141] libmachine: STDOUT: Image resized.
	
	I0930 03:58:23.310411    3951 main.go:141] libmachine: STDERR: 
	I0930 03:58:23.310424    3951 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/multinode-711000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/multinode-711000/disk.qcow2
	I0930 03:58:23.310429    3951 main.go:141] libmachine: Starting QEMU VM...
	I0930 03:58:23.310439    3951 qemu.go:418] Using hvf for hardware acceleration
	I0930 03:58:23.310472    3951 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/multinode-711000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19734-1406/.minikube/machines/multinode-711000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/multinode-711000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9e:24:f9:0c:25:3b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/multinode-711000/disk.qcow2
	I0930 03:58:23.312199    3951 main.go:141] libmachine: STDOUT: 
	I0930 03:58:23.312214    3951 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0930 03:58:23.312232    3951 client.go:171] duration metric: took 210.761584ms to LocalClient.Create
	I0930 03:58:25.314444    3951 start.go:128] duration metric: took 2.239134333s to createHost
	I0930 03:58:25.314517    3951 start.go:83] releasing machines lock for "multinode-711000", held for 2.239278583s
	W0930 03:58:25.314614    3951 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0930 03:58:25.331570    3951 out.go:177] * Deleting "multinode-711000" in qemu2 ...
	W0930 03:58:25.365547    3951 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0930 03:58:25.365568    3951 start.go:729] Will try again in 5 seconds ...
	I0930 03:58:30.367679    3951 start.go:360] acquireMachinesLock for multinode-711000: {Name:mk6ecfc7d8126da9fa2597e515ffeee9de676ee7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0930 03:58:30.368111    3951 start.go:364] duration metric: took 359.959µs to acquireMachinesLock for "multinode-711000"
	I0930 03:58:30.368250    3951 start.go:93] Provisioning new machine with config: &{Name:multinode-711000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-711000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0930 03:58:30.368548    3951 start.go:125] createHost starting for "" (driver="qemu2")
	I0930 03:58:30.390193    3951 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0930 03:58:30.441704    3951 start.go:159] libmachine.API.Create for "multinode-711000" (driver="qemu2")
	I0930 03:58:30.441833    3951 client.go:168] LocalClient.Create starting
	I0930 03:58:30.441950    3951 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19734-1406/.minikube/certs/ca.pem
	I0930 03:58:30.442015    3951 main.go:141] libmachine: Decoding PEM data...
	I0930 03:58:30.442034    3951 main.go:141] libmachine: Parsing certificate...
	I0930 03:58:30.442089    3951 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19734-1406/.minikube/certs/cert.pem
	I0930 03:58:30.442135    3951 main.go:141] libmachine: Decoding PEM data...
	I0930 03:58:30.442149    3951 main.go:141] libmachine: Parsing certificate...
	I0930 03:58:30.442805    3951 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19734-1406/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0930 03:58:30.614108    3951 main.go:141] libmachine: Creating SSH key...
	I0930 03:58:30.646248    3951 main.go:141] libmachine: Creating Disk image...
	I0930 03:58:30.646253    3951 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0930 03:58:30.646438    3951 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/multinode-711000/disk.qcow2.raw /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/multinode-711000/disk.qcow2
	I0930 03:58:30.655633    3951 main.go:141] libmachine: STDOUT: 
	I0930 03:58:30.655653    3951 main.go:141] libmachine: STDERR: 
	I0930 03:58:30.655722    3951 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/multinode-711000/disk.qcow2 +20000M
	I0930 03:58:30.663501    3951 main.go:141] libmachine: STDOUT: Image resized.
	
	I0930 03:58:30.663516    3951 main.go:141] libmachine: STDERR: 
	I0930 03:58:30.663527    3951 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/multinode-711000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/multinode-711000/disk.qcow2
	I0930 03:58:30.663530    3951 main.go:141] libmachine: Starting QEMU VM...
	I0930 03:58:30.663551    3951 qemu.go:418] Using hvf for hardware acceleration
	I0930 03:58:30.663587    3951 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/multinode-711000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19734-1406/.minikube/machines/multinode-711000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/multinode-711000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b6:4e:5b:cd:13:8a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/multinode-711000/disk.qcow2
	I0930 03:58:30.665131    3951 main.go:141] libmachine: STDOUT: 
	I0930 03:58:30.665146    3951 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0930 03:58:30.665157    3951 client.go:171] duration metric: took 223.321875ms to LocalClient.Create
	I0930 03:58:32.667337    3951 start.go:128] duration metric: took 2.298751916s to createHost
	I0930 03:58:32.667414    3951 start.go:83] releasing machines lock for "multinode-711000", held for 2.299308041s
	W0930 03:58:32.667954    3951 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p multinode-711000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-711000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0930 03:58:32.683775    3951 out.go:201] 
	W0930 03:58:32.687708    3951 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0930 03:58:32.687746    3951 out.go:270] * 
	* 
	W0930 03:58:32.690494    3951 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0930 03:58:32.706616    3951 out.go:201] 

                                                
                                                
** /stderr **
multinode_test.go:98: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-711000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
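
Every start attempt in this run fails at the same step: minikube launches qemu-system-aarch64 through /opt/socket_vmnet/bin/socket_vmnet_client, which must first connect to the socket_vmnet daemon's unix socket at /var/run/socket_vmnet. The repeated "Connection refused" means nothing is listening on that socket, i.e. the daemon is not running on the CI host. A minimal preflight sketch in Go (a hypothetical helper, not part of minikube) that reproduces the same check:

    // preflight.go: dial the socket_vmnet unix socket before attempting a
    // VM start. A dial error here is the same condition the log reports.
    package main

    import (
        "fmt"
        "net"
        "os"
        "time"
    )

    func main() {
        const sock = "/var/run/socket_vmnet" // path taken from the log above
        conn, err := net.DialTimeout("unix", sock, 2*time.Second)
        if err != nil {
            fmt.Fprintf(os.Stderr, "socket_vmnet not reachable at %s: %v\n", sock, err)
            os.Exit(1)
        }
        conn.Close()
        fmt.Println("socket_vmnet is listening; QEMU socket networking can come up")
    }

Running this on the affected host before "minikube start" would have flagged the missing daemon without burning a full provisioning cycle.
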
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-711000 -n multinode-711000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-711000 -n multinode-711000: exit status 7 (68.021875ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-711000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/FreshStart2Nodes (9.83s)
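
For contrast, the disk-image steps above do succeed, and they are just two qemu-img invocations: convert the raw boot2docker seed image to qcow2, then grow it by the requested 20000 MB. A sketch of the same sequence via os/exec, with illustrative paths rather than the CI layout:

    // disk.go: mirror the two qemu-img steps shown in the log.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func run(name string, args ...string) error {
        out, err := exec.Command(name, args...).CombinedOutput()
        fmt.Printf("%s %v: %s\n", name, args, out)
        return err
    }

    func main() {
        base := "/tmp/machines/demo" // illustrative path
        if err := run("qemu-img", "convert", "-f", "raw", "-O", "qcow2",
            base+"/disk.qcow2.raw", base+"/disk.qcow2"); err != nil {
            panic(err)
        }
        // grow the qcow2 by 20000 MB, matching the "+20000M" in the log
        if err := run("qemu-img", "resize", base+"/disk.qcow2", "+20000M"); err != nil {
            panic(err)
        }
    }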

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (97.78s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-711000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-711000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: exit status 1 (127.585833ms)

                                                
                                                
** stderr ** 
	error: cluster "multinode-711000" does not exist

                                                
                                                
** /stderr **
multinode_test.go:495: failed to create busybox deployment to multinode cluster
multinode_test.go:498: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-711000 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-711000 -- rollout status deployment/busybox: exit status 1 (59.495167ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-711000"

                                                
                                                
** /stderr **
multinode_test.go:500: failed to deploy busybox to multinode cluster
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-711000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-711000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (58.456583ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-711000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I0930 03:58:33.035824    1929 retry.go:31] will retry after 1.379909882s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-711000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-711000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (106.588834ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-711000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I0930 03:58:34.523129    1929 retry.go:31] will retry after 1.627404973s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-711000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-711000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.351875ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-711000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I0930 03:58:36.257229    1929 retry.go:31] will retry after 2.590894038s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-711000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-711000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.384208ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-711000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I0930 03:58:38.953819    1929 retry.go:31] will retry after 4.895504162s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-711000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-711000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.89425ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-711000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I0930 03:58:43.956613    1929 retry.go:31] will retry after 6.085970191s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-711000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-711000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.626042ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-711000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I0930 03:58:50.150474    1929 retry.go:31] will retry after 8.715020262s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-711000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-711000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.639333ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-711000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I0930 03:58:58.970623    1929 retry.go:31] will retry after 10.318055558s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-711000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-711000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.811625ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-711000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I0930 03:59:09.395850    1929 retry.go:31] will retry after 10.604198368s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-711000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-711000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.632208ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-711000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I0930 03:59:20.106935    1929 retry.go:31] will retry after 26.678601104s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-711000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-711000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.321333ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-711000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I0930 03:59:46.889293    1929 retry.go:31] will retry after 23.309885616s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-711000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-711000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.477083ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-711000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:524: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-711000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:528: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-711000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (58.268208ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-711000"

                                                
                                                
** /stderr **
multinode_test.go:530: failed get Pod names
multinode_test.go:536: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-711000 -- exec  -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-711000 -- exec  -- nslookup kubernetes.io: exit status 1 (57.866667ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-711000"

                                                
                                                
** /stderr **
multinode_test.go:538: Pod  could not resolve 'kubernetes.io': exit status 1
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-711000 -- exec  -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-711000 -- exec  -- nslookup kubernetes.default: exit status 1 (57.793125ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-711000"

                                                
                                                
** /stderr **
multinode_test.go:548: Pod  could not resolve 'kubernetes.default': exit status 1
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-711000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-711000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (58.05075ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-711000"

                                                
                                                
** /stderr **
multinode_test.go:556: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-711000 -n multinode-711000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-711000 -n multinode-711000: exit status 7 (31.285333ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-711000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (97.78s)
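
The retry.go lines above show the helper's backoff shape: delays that roughly double per attempt, with jitter, levelling off near 30s until the test's overall deadline expires. A minimal sketch of that pattern (constants are illustrative, not the test helper's actual values):

    // backoff.go: exponential growth with jitter and a cap, as visible in
    // the "will retry after ..." lines above.
    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    func retryWithBackoff(attempts int, op func() error) error {
        delay := time.Second
        for i := 0; i < attempts; i++ {
            if err := op(); err == nil {
                return nil
            }
            jittered := delay + time.Duration(rand.Int63n(int64(delay))) // up to 2x
            fmt.Printf("will retry after %v\n", jittered)
            time.Sleep(jittered)
            delay *= 2
            if delay > 30*time.Second {
                delay = 30 * time.Second // plateau, as the logged delays do
            }
        }
        return errors.New("retries exhausted")
    }

    func main() {
        _ = retryWithBackoff(5, func() error { return errors.New("no server found") })
    }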

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.09s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-711000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:564: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-711000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (58.233292ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-711000"

                                                
                                                
** /stderr **
multinode_test.go:566: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-711000 -n multinode-711000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-711000 -n multinode-711000: exit status 7 (30.236708ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-711000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (0.09s)

                                                
                                    
TestMultiNode/serial/AddNode (0.08s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-711000 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-711000 -v 3 --alsologtostderr: exit status 83 (44.892959ms)

                                                
                                                
-- stdout --
	* The control-plane node multinode-711000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-711000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0930 04:00:10.686333    4368 out.go:345] Setting OutFile to fd 1 ...
	I0930 04:00:10.686487    4368 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 04:00:10.686490    4368 out.go:358] Setting ErrFile to fd 2...
	I0930 04:00:10.686492    4368 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 04:00:10.686597    4368 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19734-1406/.minikube/bin
	I0930 04:00:10.686815    4368 mustload.go:65] Loading cluster: multinode-711000
	I0930 04:00:10.687017    4368 config.go:182] Loaded profile config "multinode-711000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0930 04:00:10.691901    4368 out.go:177] * The control-plane node multinode-711000 host is not running: state=Stopped
	I0930 04:00:10.697930    4368 out.go:177]   To start a cluster, run: "minikube start -p multinode-711000"

                                                
                                                
** /stderr **
multinode_test.go:123: failed to add node to current cluster. args "out/minikube-darwin-arm64 node add -p multinode-711000 -v 3 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-711000 -n multinode-711000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-711000 -n multinode-711000: exit status 7 (30.023ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-711000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/AddNode (0.08s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-711000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
multinode_test.go:221: (dbg) Non-zero exit: kubectl --context multinode-711000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (27.576416ms)

                                                
                                                
** stderr ** 
	Error in configuration: context was not found for specified context: multinode-711000

                                                
                                                
** /stderr **
multinode_test.go:223: failed to 'kubectl get nodes' with args "kubectl --context multinode-711000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
multinode_test.go:230: failed to decode json from label list: args "kubectl --context multinode-711000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-711000 -n multinode-711000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-711000 -n multinode-711000: exit status 7 (30.499208ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-711000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/MultiNodeLabels (0.06s)
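
"context was not found" is plain kubectl behavior here: because the cluster never started, minikube never wrote a multinode-711000 context into the kubeconfig. A sketch of the same lookup using client-go's clientcmd package (an external dependency; the context name and default kubeconfig path match this run):

    // contexts.go: check whether a kubeconfig context exists before
    // running kubectl --context against it.
    package main

    import (
        "fmt"
        "os"
        "path/filepath"

        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        path := filepath.Join(os.Getenv("HOME"), ".kube", "config")
        cfg, err := clientcmd.LoadFromFile(path)
        if err != nil {
            panic(err)
        }
        if _, ok := cfg.Contexts["multinode-711000"]; !ok {
            fmt.Println(`context "multinode-711000" not found: the cluster never came up`)
        }
    }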

                                                
                                    
TestMultiNode/serial/ProfileList (0.08s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
multinode_test.go:166: expected profile "multinode-711000" in json of 'profile list' include 3 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"multinode-711000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"multinode-711000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"multinode-711000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-711000 -n multinode-711000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-711000 -n multinode-711000: exit status 7 (30.820334ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-711000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ProfileList (0.08s)
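
The assertion fails because the saved profile's Config.Nodes still holds only the single control-plane entry: MultiNodeRequested is true, but no second node was ever added. A sketch that decodes just the fields the assertion cares about; the struct below is a partial projection of the JSON above, not minikube's own types:

    // profiles.go: count nodes in `minikube profile list --output json`.
    package main

    import (
        "encoding/json"
        "fmt"
    )

    type profileList struct {
        Valid []struct {
            Name   string `json:"Name"`
            Config struct {
                Nodes []struct {
                    ControlPlane bool `json:"ControlPlane"`
                    Worker       bool `json:"Worker"`
                } `json:"Nodes"`
            } `json:"Config"`
        } `json:"valid"`
    }

    func main() {
        // trimmed sample of the output captured above
        raw := []byte(`{"invalid":[],"valid":[{"Name":"multinode-711000","Config":{"Nodes":[{"ControlPlane":true,"Worker":true}]}}]}`)
        var pl profileList
        if err := json.Unmarshal(raw, &pl); err != nil {
            panic(err)
        }
        for _, p := range pl.Valid {
            fmt.Printf("%s: %d node(s)\n", p.Name, len(p.Config.Nodes)) // here: 1
        }
    }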

                                                
                                    
TestMultiNode/serial/CopyFile (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-711000 status --output json --alsologtostderr
multinode_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-711000 status --output json --alsologtostderr: exit status 7 (30.985916ms)

                                                
                                                
-- stdout --
	{"Name":"multinode-711000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

                                                
                                                
-- /stdout --
** stderr ** 
	I0930 04:00:10.898168    4380 out.go:345] Setting OutFile to fd 1 ...
	I0930 04:00:10.898289    4380 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 04:00:10.898292    4380 out.go:358] Setting ErrFile to fd 2...
	I0930 04:00:10.898295    4380 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 04:00:10.898426    4380 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19734-1406/.minikube/bin
	I0930 04:00:10.898545    4380 out.go:352] Setting JSON to true
	I0930 04:00:10.898562    4380 mustload.go:65] Loading cluster: multinode-711000
	I0930 04:00:10.898619    4380 notify.go:220] Checking for updates...
	I0930 04:00:10.898751    4380 config.go:182] Loaded profile config "multinode-711000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0930 04:00:10.898760    4380 status.go:174] checking status of multinode-711000 ...
	I0930 04:00:10.898992    4380 status.go:364] multinode-711000 host status = "Stopped" (err=<nil>)
	I0930 04:00:10.898996    4380 status.go:377] host is not running, skipping remaining checks
	I0930 04:00:10.898998    4380 status.go:176] multinode-711000 status: &{Name:multinode-711000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:191: failed to decode json from status: args "out/minikube-darwin-arm64 -p multinode-711000 status --output json --alsologtostderr": json: cannot unmarshal object into Go value of type []cluster.Status
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-711000 -n multinode-711000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-711000 -n multinode-711000: exit status 7 (31.160042ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-711000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/CopyFile (0.06s)
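
The decode failure ("cannot unmarshal object into Go value of type []cluster.Status") is a shape mismatch: with a single surviving node, minikube status --output json prints one bare JSON object, while the test decodes into a slice. A tolerant-decoding sketch (the struct is a trimmed, assumed projection of the output):

    // status.go: accept both the array and the bare-object shapes.
    package main

    import (
        "encoding/json"
        "fmt"
    )

    type status struct {
        Name string `json:"Name"`
        Host string `json:"Host"`
    }

    func decodeStatuses(raw []byte) ([]status, error) {
        var many []status
        if err := json.Unmarshal(raw, &many); err == nil {
            return many, nil // multi-node output: a JSON array
        }
        var one status
        if err := json.Unmarshal(raw, &one); err != nil {
            return nil, err
        }
        return []status{one}, nil // single-node output: a bare object
    }

    func main() {
        raw := []byte(`{"Name":"multinode-711000","Host":"Stopped"}`)
        ss, err := decodeStatuses(raw)
        fmt.Println(ss, err)
    }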

                                                
                                    
TestMultiNode/serial/StopNode (0.14s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-711000 node stop m03
multinode_test.go:248: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-711000 node stop m03: exit status 85 (48.942709ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_295f67d8757edd996fe5c1e7ccde72c355ccf4dc_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:250: node stop returned an error. args "out/minikube-darwin-arm64 -p multinode-711000 node stop m03": exit status 85
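
"Could not find node m03" follows from minikube's node-naming scheme: the primary node carries the profile name, and additional nodes are suffixed -m02, -m03, ... (node subcommands accept the short m03 form). Provisioning failed, so only the primary exists. A sketch of the convention:

    // nodename.go: the naming scheme behind the m03 lookup above.
    package main

    import "fmt"

    func nodeName(profile string, index int) string {
        if index == 1 {
            return profile // primary node uses the profile name
        }
        return fmt.Sprintf("%s-m%02d", profile, index)
    }

    func main() {
        for i := 1; i <= 3; i++ {
            fmt.Println(nodeName("multinode-711000", i))
        }
    }
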
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-711000 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-711000 status: exit status 7 (30.614959ms)

                                                
                                                
-- stdout --
	multinode-711000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-711000 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-711000 status --alsologtostderr: exit status 7 (30.950333ms)

                                                
                                                
-- stdout --
	multinode-711000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0930 04:00:11.040642    4388 out.go:345] Setting OutFile to fd 1 ...
	I0930 04:00:11.040813    4388 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 04:00:11.040816    4388 out.go:358] Setting ErrFile to fd 2...
	I0930 04:00:11.040818    4388 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 04:00:11.040949    4388 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19734-1406/.minikube/bin
	I0930 04:00:11.041060    4388 out.go:352] Setting JSON to false
	I0930 04:00:11.041070    4388 mustload.go:65] Loading cluster: multinode-711000
	I0930 04:00:11.041133    4388 notify.go:220] Checking for updates...
	I0930 04:00:11.041254    4388 config.go:182] Loaded profile config "multinode-711000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0930 04:00:11.041265    4388 status.go:174] checking status of multinode-711000 ...
	I0930 04:00:11.041509    4388 status.go:364] multinode-711000 host status = "Stopped" (err=<nil>)
	I0930 04:00:11.041513    4388 status.go:377] host is not running, skipping remaining checks
	I0930 04:00:11.041515    4388 status.go:176] multinode-711000 status: &{Name:multinode-711000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:267: incorrect number of running kubelets: args "out/minikube-darwin-arm64 -p multinode-711000 status --alsologtostderr": multinode-711000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-711000 -n multinode-711000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-711000 -n multinode-711000: exit status 7 (30.863458ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-711000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopNode (0.14s)
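
"incorrect number of running kubelets" comes from scanning the plain-text status output: the test expects one "kubelet: Running" line per node (two here) and finds none, since every component reports Stopped. A minimal sketch of that kind of check:

    // kubelets.go: count running kubelets in plain `minikube status` output.
    package main

    import (
        "fmt"
        "strings"
    )

    func main() {
        out := `multinode-711000
    type: Control Plane
    host: Stopped
    kubelet: Stopped
    apiserver: Stopped
    kubeconfig: Stopped
    `
        running := strings.Count(out, "kubelet: Running")
        fmt.Printf("running kubelets: %d\n", running) // here: 0, test wanted 2
    }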

                                                
                                    
TestMultiNode/serial/StartAfterStop (52.77s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-711000 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-711000 node start m03 -v=7 --alsologtostderr: exit status 85 (47.232542ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0930 04:00:11.102799    4392 out.go:345] Setting OutFile to fd 1 ...
	I0930 04:00:11.103037    4392 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 04:00:11.103040    4392 out.go:358] Setting ErrFile to fd 2...
	I0930 04:00:11.103042    4392 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 04:00:11.103165    4392 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19734-1406/.minikube/bin
	I0930 04:00:11.103401    4392 mustload.go:65] Loading cluster: multinode-711000
	I0930 04:00:11.103591    4392 config.go:182] Loaded profile config "multinode-711000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0930 04:00:11.107891    4392 out.go:201] 
	W0930 04:00:11.110853    4392 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	W0930 04:00:11.110858    4392 out.go:270] * 
	* 
	W0930 04:00:11.112630    4392 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0930 04:00:11.115855    4392 out.go:201] 

                                                
                                                
** /stderr **
multinode_test.go:284: I0930 04:00:11.102799    4392 out.go:345] Setting OutFile to fd 1 ...
I0930 04:00:11.103037    4392 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0930 04:00:11.103040    4392 out.go:358] Setting ErrFile to fd 2...
I0930 04:00:11.103042    4392 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0930 04:00:11.103165    4392 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19734-1406/.minikube/bin
I0930 04:00:11.103401    4392 mustload.go:65] Loading cluster: multinode-711000
I0930 04:00:11.103591    4392 config.go:182] Loaded profile config "multinode-711000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0930 04:00:11.107891    4392 out.go:201] 
W0930 04:00:11.110853    4392 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
W0930 04:00:11.110858    4392 out.go:270] * 
* 
W0930 04:00:11.112630    4392 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0930 04:00:11.115855    4392 out.go:201] 

                                                
                                                
multinode_test.go:285: node start returned an error. args "out/minikube-darwin-arm64 -p multinode-711000 node start m03 -v=7 --alsologtostderr": exit status 85
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-711000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-711000 status -v=7 --alsologtostderr: exit status 7 (31.207792ms)

                                                
                                                
-- stdout --
	multinode-711000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0930 04:00:11.150441    4394 out.go:345] Setting OutFile to fd 1 ...
	I0930 04:00:11.150581    4394 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 04:00:11.150584    4394 out.go:358] Setting ErrFile to fd 2...
	I0930 04:00:11.150587    4394 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 04:00:11.150706    4394 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19734-1406/.minikube/bin
	I0930 04:00:11.150829    4394 out.go:352] Setting JSON to false
	I0930 04:00:11.150840    4394 mustload.go:65] Loading cluster: multinode-711000
	I0930 04:00:11.150894    4394 notify.go:220] Checking for updates...
	I0930 04:00:11.151040    4394 config.go:182] Loaded profile config "multinode-711000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0930 04:00:11.151049    4394 status.go:174] checking status of multinode-711000 ...
	I0930 04:00:11.151273    4394 status.go:364] multinode-711000 host status = "Stopped" (err=<nil>)
	I0930 04:00:11.151276    4394 status.go:377] host is not running, skipping remaining checks
	I0930 04:00:11.151278    4394 status.go:176] multinode-711000 status: &{Name:multinode-711000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I0930 04:00:11.152132    1929 retry.go:31] will retry after 646.708871ms: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-711000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-711000 status -v=7 --alsologtostderr: exit status 7 (73.724167ms)

                                                
                                                
-- stdout --
	multinode-711000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0930 04:00:11.872697    4396 out.go:345] Setting OutFile to fd 1 ...
	I0930 04:00:11.872910    4396 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 04:00:11.872914    4396 out.go:358] Setting ErrFile to fd 2...
	I0930 04:00:11.872918    4396 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 04:00:11.873080    4396 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19734-1406/.minikube/bin
	I0930 04:00:11.873231    4396 out.go:352] Setting JSON to false
	I0930 04:00:11.873245    4396 mustload.go:65] Loading cluster: multinode-711000
	I0930 04:00:11.873286    4396 notify.go:220] Checking for updates...
	I0930 04:00:11.873484    4396 config.go:182] Loaded profile config "multinode-711000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0930 04:00:11.873506    4396 status.go:174] checking status of multinode-711000 ...
	I0930 04:00:11.873835    4396 status.go:364] multinode-711000 host status = "Stopped" (err=<nil>)
	I0930 04:00:11.873840    4396 status.go:377] host is not running, skipping remaining checks
	I0930 04:00:11.873843    4396 status.go:176] multinode-711000 status: &{Name:multinode-711000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I0930 04:00:11.874869    1929 retry.go:31] will retry after 1.663425022s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-711000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-711000 status -v=7 --alsologtostderr: exit status 7 (74.300042ms)

                                                
                                                
-- stdout --
	multinode-711000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0930 04:00:13.612439    4398 out.go:345] Setting OutFile to fd 1 ...
	I0930 04:00:13.612628    4398 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 04:00:13.612633    4398 out.go:358] Setting ErrFile to fd 2...
	I0930 04:00:13.612636    4398 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 04:00:13.612827    4398 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19734-1406/.minikube/bin
	I0930 04:00:13.612992    4398 out.go:352] Setting JSON to false
	I0930 04:00:13.613013    4398 mustload.go:65] Loading cluster: multinode-711000
	I0930 04:00:13.613050    4398 notify.go:220] Checking for updates...
	I0930 04:00:13.613290    4398 config.go:182] Loaded profile config "multinode-711000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0930 04:00:13.613302    4398 status.go:174] checking status of multinode-711000 ...
	I0930 04:00:13.613614    4398 status.go:364] multinode-711000 host status = "Stopped" (err=<nil>)
	I0930 04:00:13.613619    4398 status.go:377] host is not running, skipping remaining checks
	I0930 04:00:13.613622    4398 status.go:176] multinode-711000 status: &{Name:multinode-711000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I0930 04:00:13.614697    1929 retry.go:31] will retry after 1.2278814s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-711000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-711000 status -v=7 --alsologtostderr: exit status 7 (72.953083ms)

                                                
                                                
-- stdout --
	multinode-711000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0930 04:00:14.915707    4400 out.go:345] Setting OutFile to fd 1 ...
	I0930 04:00:14.915893    4400 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 04:00:14.915897    4400 out.go:358] Setting ErrFile to fd 2...
	I0930 04:00:14.915900    4400 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 04:00:14.916068    4400 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19734-1406/.minikube/bin
	I0930 04:00:14.916210    4400 out.go:352] Setting JSON to false
	I0930 04:00:14.916224    4400 mustload.go:65] Loading cluster: multinode-711000
	I0930 04:00:14.916259    4400 notify.go:220] Checking for updates...
	I0930 04:00:14.916474    4400 config.go:182] Loaded profile config "multinode-711000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0930 04:00:14.916485    4400 status.go:174] checking status of multinode-711000 ...
	I0930 04:00:14.916825    4400 status.go:364] multinode-711000 host status = "Stopped" (err=<nil>)
	I0930 04:00:14.916830    4400 status.go:377] host is not running, skipping remaining checks
	I0930 04:00:14.916832    4400 status.go:176] multinode-711000 status: &{Name:multinode-711000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I0930 04:00:14.917866    1929 retry.go:31] will retry after 5.037628363s: exit status 7
E0930 04:00:18.258132    1929 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19734-1406/.minikube/profiles/addons-584000/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-711000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-711000 status -v=7 --alsologtostderr: exit status 7 (74.392625ms)

                                                
                                                
-- stdout --
	multinode-711000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0930 04:00:20.030092    4402 out.go:345] Setting OutFile to fd 1 ...
	I0930 04:00:20.030274    4402 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 04:00:20.030278    4402 out.go:358] Setting ErrFile to fd 2...
	I0930 04:00:20.030281    4402 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 04:00:20.030451    4402 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19734-1406/.minikube/bin
	I0930 04:00:20.030598    4402 out.go:352] Setting JSON to false
	I0930 04:00:20.030612    4402 mustload.go:65] Loading cluster: multinode-711000
	I0930 04:00:20.030653    4402 notify.go:220] Checking for updates...
	I0930 04:00:20.030869    4402 config.go:182] Loaded profile config "multinode-711000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0930 04:00:20.030881    4402 status.go:174] checking status of multinode-711000 ...
	I0930 04:00:20.031177    4402 status.go:364] multinode-711000 host status = "Stopped" (err=<nil>)
	I0930 04:00:20.031181    4402 status.go:377] host is not running, skipping remaining checks
	I0930 04:00:20.031184    4402 status.go:176] multinode-711000 status: &{Name:multinode-711000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I0930 04:00:20.032206    1929 retry.go:31] will retry after 3.364132685s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-711000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-711000 status -v=7 --alsologtostderr: exit status 7 (72.237292ms)

                                                
                                                
-- stdout --
	multinode-711000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0930 04:00:23.468681    4404 out.go:345] Setting OutFile to fd 1 ...
	I0930 04:00:23.468871    4404 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 04:00:23.468875    4404 out.go:358] Setting ErrFile to fd 2...
	I0930 04:00:23.468879    4404 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 04:00:23.469055    4404 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19734-1406/.minikube/bin
	I0930 04:00:23.469212    4404 out.go:352] Setting JSON to false
	I0930 04:00:23.469225    4404 mustload.go:65] Loading cluster: multinode-711000
	I0930 04:00:23.469261    4404 notify.go:220] Checking for updates...
	I0930 04:00:23.469483    4404 config.go:182] Loaded profile config "multinode-711000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0930 04:00:23.469495    4404 status.go:174] checking status of multinode-711000 ...
	I0930 04:00:23.469840    4404 status.go:364] multinode-711000 host status = "Stopped" (err=<nil>)
	I0930 04:00:23.469845    4404 status.go:377] host is not running, skipping remaining checks
	I0930 04:00:23.469848    4404 status.go:176] multinode-711000 status: &{Name:multinode-711000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I0930 04:00:23.470934    1929 retry.go:31] will retry after 9.686077443s: exit status 7
E0930 04:00:32.409437    1929 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19734-1406/.minikube/profiles/functional-853000/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-711000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-711000 status -v=7 --alsologtostderr: exit status 7 (74.411666ms)

                                                
                                                
-- stdout --
	multinode-711000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0930 04:00:33.231505    4407 out.go:345] Setting OutFile to fd 1 ...
	I0930 04:00:33.231720    4407 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 04:00:33.231725    4407 out.go:358] Setting ErrFile to fd 2...
	I0930 04:00:33.231728    4407 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 04:00:33.231881    4407 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19734-1406/.minikube/bin
	I0930 04:00:33.232099    4407 out.go:352] Setting JSON to false
	I0930 04:00:33.232112    4407 mustload.go:65] Loading cluster: multinode-711000
	I0930 04:00:33.232161    4407 notify.go:220] Checking for updates...
	I0930 04:00:33.232408    4407 config.go:182] Loaded profile config "multinode-711000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0930 04:00:33.232418    4407 status.go:174] checking status of multinode-711000 ...
	I0930 04:00:33.232746    4407 status.go:364] multinode-711000 host status = "Stopped" (err=<nil>)
	I0930 04:00:33.232750    4407 status.go:377] host is not running, skipping remaining checks
	I0930 04:00:33.232753    4407 status.go:176] multinode-711000 status: &{Name:multinode-711000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I0930 04:00:33.233813    1929 retry.go:31] will retry after 6.356945308s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-711000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-711000 status -v=7 --alsologtostderr: exit status 7 (74.193333ms)

                                                
                                                
-- stdout --
	multinode-711000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0930 04:00:39.665060    4411 out.go:345] Setting OutFile to fd 1 ...
	I0930 04:00:39.665269    4411 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 04:00:39.665273    4411 out.go:358] Setting ErrFile to fd 2...
	I0930 04:00:39.665276    4411 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 04:00:39.665438    4411 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19734-1406/.minikube/bin
	I0930 04:00:39.665589    4411 out.go:352] Setting JSON to false
	I0930 04:00:39.665603    4411 mustload.go:65] Loading cluster: multinode-711000
	I0930 04:00:39.665652    4411 notify.go:220] Checking for updates...
	I0930 04:00:39.665912    4411 config.go:182] Loaded profile config "multinode-711000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0930 04:00:39.665923    4411 status.go:174] checking status of multinode-711000 ...
	I0930 04:00:39.666236    4411 status.go:364] multinode-711000 host status = "Stopped" (err=<nil>)
	I0930 04:00:39.666241    4411 status.go:377] host is not running, skipping remaining checks
	I0930 04:00:39.666243    4411 status.go:176] multinode-711000 status: &{Name:multinode-711000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I0930 04:00:39.667251    1929 retry.go:31] will retry after 24.063709244s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-711000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-711000 status -v=7 --alsologtostderr: exit status 7 (74.766291ms)

                                                
                                                
-- stdout --
	multinode-711000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0930 04:01:03.805731    4423 out.go:345] Setting OutFile to fd 1 ...
	I0930 04:01:03.805959    4423 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 04:01:03.805963    4423 out.go:358] Setting ErrFile to fd 2...
	I0930 04:01:03.805966    4423 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 04:01:03.806130    4423 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19734-1406/.minikube/bin
	I0930 04:01:03.806296    4423 out.go:352] Setting JSON to false
	I0930 04:01:03.806310    4423 mustload.go:65] Loading cluster: multinode-711000
	I0930 04:01:03.806349    4423 notify.go:220] Checking for updates...
	I0930 04:01:03.806573    4423 config.go:182] Loaded profile config "multinode-711000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0930 04:01:03.806586    4423 status.go:174] checking status of multinode-711000 ...
	I0930 04:01:03.806913    4423 status.go:364] multinode-711000 host status = "Stopped" (err=<nil>)
	I0930 04:01:03.806917    4423 status.go:377] host is not running, skipping remaining checks
	I0930 04:01:03.806920    4423 status.go:176] multinode-711000 status: &{Name:multinode-711000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:294: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-711000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-711000 -n multinode-711000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-711000 -n multinode-711000: exit status 7 (33.293875ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-711000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StartAfterStop (52.77s)
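Analyst note: the repeated status checks above follow the harness's retry pattern (retry.go:31), re-running `minikube status` with growing, jittered delays until it exits 0 or the attempts run out; exit status 7 throughout means the host is stopped. Below is a minimal, hypothetical Go sketch of that polling loop — the binary path and profile name are copied from the log, and a plain doubling backoff stands in for the jittered delays the real harness uses.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForRunning polls `minikube status` for the given profile until it
// exits 0 or the deadline passes. Exit status 7, as seen throughout the
// log above, means the host is stopped.
func waitForRunning(profile string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	delay := 500 * time.Millisecond
	for {
		cmd := exec.Command("out/minikube-darwin-arm64", "-p", profile, "status")
		if err := cmd.Run(); err == nil {
			return nil // all components reported Running
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("profile %q still not running after %s", profile, timeout)
		}
		time.Sleep(delay)
		delay *= 2 // the real harness jitters its delays; plain doubling here
	}
}

func main() {
	if err := waitForRunning("multinode-711000", time.Minute); err != nil {
		fmt.Println(err)
	}
}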

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (9.05s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-711000
multinode_test.go:321: (dbg) Run:  out/minikube-darwin-arm64 stop -p multinode-711000
multinode_test.go:321: (dbg) Done: out/minikube-darwin-arm64 stop -p multinode-711000: (3.687911166s)
multinode_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-711000 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-711000 --wait=true -v=8 --alsologtostderr: exit status 80 (5.22656675s)

                                                
                                                
-- stdout --
	* [multinode-711000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19734
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19734-1406/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19734-1406/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-711000" primary control-plane node in "multinode-711000" cluster
	* Restarting existing qemu2 VM for "multinode-711000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-711000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0930 04:01:07.625591    4447 out.go:345] Setting OutFile to fd 1 ...
	I0930 04:01:07.625754    4447 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 04:01:07.625758    4447 out.go:358] Setting ErrFile to fd 2...
	I0930 04:01:07.625761    4447 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 04:01:07.625929    4447 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19734-1406/.minikube/bin
	I0930 04:01:07.627111    4447 out.go:352] Setting JSON to false
	I0930 04:01:07.646192    4447 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3630,"bootTime":1727690437,"procs":509,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0930 04:01:07.646269    4447 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0930 04:01:07.651069    4447 out.go:177] * [multinode-711000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0930 04:01:07.658016    4447 out.go:177]   - MINIKUBE_LOCATION=19734
	I0930 04:01:07.658059    4447 notify.go:220] Checking for updates...
	I0930 04:01:07.665839    4447 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19734-1406/kubeconfig
	I0930 04:01:07.670049    4447 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0930 04:01:07.673093    4447 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0930 04:01:07.674459    4447 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19734-1406/.minikube
	I0930 04:01:07.678062    4447 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0930 04:01:07.681453    4447 config.go:182] Loaded profile config "multinode-711000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0930 04:01:07.681515    4447 driver.go:394] Setting default libvirt URI to qemu:///system
	I0930 04:01:07.685883    4447 out.go:177] * Using the qemu2 driver based on existing profile
	I0930 04:01:07.693010    4447 start.go:297] selected driver: qemu2
	I0930 04:01:07.693015    4447 start.go:901] validating driver "qemu2" against &{Name:multinode-711000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.31.1 ClusterName:multinode-711000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] M
ountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 04:01:07.693065    4447 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0930 04:01:07.695551    4447 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0930 04:01:07.695578    4447 cni.go:84] Creating CNI manager for ""
	I0930 04:01:07.695606    4447 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0930 04:01:07.695669    4447 start.go:340] cluster config:
	{Name:multinode-711000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-711000 Namespace:default APIServerH
AVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fals
e DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 04:01:07.699576    4447 iso.go:125] acquiring lock: {Name:mk0aae8f7f133a44b195b0eefe4c7ae573d14aac Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0930 04:01:07.707053    4447 out.go:177] * Starting "multinode-711000" primary control-plane node in "multinode-711000" cluster
	I0930 04:01:07.711066    4447 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0930 04:01:07.711088    4447 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0930 04:01:07.711095    4447 cache.go:56] Caching tarball of preloaded images
	I0930 04:01:07.711158    4447 preload.go:172] Found /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0930 04:01:07.711165    4447 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0930 04:01:07.711223    4447 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19734-1406/.minikube/profiles/multinode-711000/config.json ...
	I0930 04:01:07.711704    4447 start.go:360] acquireMachinesLock for multinode-711000: {Name:mk6ecfc7d8126da9fa2597e515ffeee9de676ee7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0930 04:01:07.711745    4447 start.go:364] duration metric: took 33.792µs to acquireMachinesLock for "multinode-711000"
	I0930 04:01:07.711755    4447 start.go:96] Skipping create...Using existing machine configuration
	I0930 04:01:07.711759    4447 fix.go:54] fixHost starting: 
	I0930 04:01:07.711909    4447 fix.go:112] recreateIfNeeded on multinode-711000: state=Stopped err=<nil>
	W0930 04:01:07.711919    4447 fix.go:138] unexpected machine state, will restart: <nil>
	I0930 04:01:07.720036    4447 out.go:177] * Restarting existing qemu2 VM for "multinode-711000" ...
	I0930 04:01:07.723942    4447 qemu.go:418] Using hvf for hardware acceleration
	I0930 04:01:07.723980    4447 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/multinode-711000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19734-1406/.minikube/machines/multinode-711000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/multinode-711000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b6:4e:5b:cd:13:8a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/multinode-711000/disk.qcow2
	I0930 04:01:07.726260    4447 main.go:141] libmachine: STDOUT: 
	I0930 04:01:07.726282    4447 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0930 04:01:07.726315    4447 fix.go:56] duration metric: took 14.554708ms for fixHost
	I0930 04:01:07.726320    4447 start.go:83] releasing machines lock for "multinode-711000", held for 14.569334ms
	W0930 04:01:07.726327    4447 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0930 04:01:07.726377    4447 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0930 04:01:07.726382    4447 start.go:729] Will try again in 5 seconds ...
	I0930 04:01:12.727710    4447 start.go:360] acquireMachinesLock for multinode-711000: {Name:mk6ecfc7d8126da9fa2597e515ffeee9de676ee7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0930 04:01:12.728122    4447 start.go:364] duration metric: took 327.209µs to acquireMachinesLock for "multinode-711000"
	I0930 04:01:12.728251    4447 start.go:96] Skipping create...Using existing machine configuration
	I0930 04:01:12.728271    4447 fix.go:54] fixHost starting: 
	I0930 04:01:12.728985    4447 fix.go:112] recreateIfNeeded on multinode-711000: state=Stopped err=<nil>
	W0930 04:01:12.729011    4447 fix.go:138] unexpected machine state, will restart: <nil>
	I0930 04:01:12.734616    4447 out.go:177] * Restarting existing qemu2 VM for "multinode-711000" ...
	I0930 04:01:12.739513    4447 qemu.go:418] Using hvf for hardware acceleration
	I0930 04:01:12.739737    4447 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/multinode-711000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19734-1406/.minikube/machines/multinode-711000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/multinode-711000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b6:4e:5b:cd:13:8a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/multinode-711000/disk.qcow2
	I0930 04:01:12.749383    4447 main.go:141] libmachine: STDOUT: 
	I0930 04:01:12.749443    4447 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0930 04:01:12.749509    4447 fix.go:56] duration metric: took 21.241ms for fixHost
	I0930 04:01:12.749526    4447 start.go:83] releasing machines lock for "multinode-711000", held for 21.376833ms
	W0930 04:01:12.749722    4447 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p multinode-711000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-711000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0930 04:01:12.758499    4447 out.go:201] 
	W0930 04:01:12.762553    4447 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0930 04:01:12.762578    4447 out.go:270] * 
	* 
	W0930 04:01:12.765076    4447 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0930 04:01:12.773546    4447 out.go:201] 

                                                
                                                
** /stderr **
multinode_test.go:328: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p multinode-711000" : exit status 80
multinode_test.go:331: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-711000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-711000 -n multinode-711000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-711000 -n multinode-711000: exit status 7 (33.021167ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-711000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (9.05s)
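Analyst note: every restart in this run dies the same way — `Failed to connect to "/var/run/socket_vmnet": Connection refused` — meaning the socket_vmnet helper the qemu2 driver depends on is not listening, so minikube exits with GUEST_PROVISION (status 80) after its single 5-second retry. A hypothetical pre-flight probe of that unix socket, sketched below, would surface the root cause before `minikube start` is even attempted; note that on a real host a permissions problem can also make the socket look unreachable.

package main

import (
	"fmt"
	"net"
	"time"
)

// socketVMNetUp reports whether anything is accepting connections on the
// socket_vmnet unix socket. "Connection refused" matches the failure in
// the log above; a permission error would also return false here.
func socketVMNetUp(path string) bool {
	conn, err := net.DialTimeout("unix", path, 2*time.Second)
	if err != nil {
		return false
	}
	conn.Close()
	return true
}

func main() {
	if !socketVMNetUp("/var/run/socket_vmnet") {
		fmt.Println("socket_vmnet is not listening; start it before `minikube start`")
	}
}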

                                                
                                    
TestMultiNode/serial/DeleteNode (0.1s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-711000 node delete m03
multinode_test.go:416: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-711000 node delete m03: exit status 83 (41.380625ms)

                                                
                                                
-- stdout --
	* The control-plane node multinode-711000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-711000"

                                                
                                                
-- /stdout --
multinode_test.go:418: node delete returned an error. args "out/minikube-darwin-arm64 -p multinode-711000 node delete m03": exit status 83
multinode_test.go:422: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-711000 status --alsologtostderr
multinode_test.go:422: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-711000 status --alsologtostderr: exit status 7 (30.934583ms)

                                                
                                                
-- stdout --
	multinode-711000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0930 04:01:12.958537    4461 out.go:345] Setting OutFile to fd 1 ...
	I0930 04:01:12.958677    4461 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 04:01:12.958680    4461 out.go:358] Setting ErrFile to fd 2...
	I0930 04:01:12.958683    4461 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 04:01:12.958804    4461 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19734-1406/.minikube/bin
	I0930 04:01:12.958944    4461 out.go:352] Setting JSON to false
	I0930 04:01:12.958955    4461 mustload.go:65] Loading cluster: multinode-711000
	I0930 04:01:12.959006    4461 notify.go:220] Checking for updates...
	I0930 04:01:12.959168    4461 config.go:182] Loaded profile config "multinode-711000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0930 04:01:12.959178    4461 status.go:174] checking status of multinode-711000 ...
	I0930 04:01:12.959411    4461 status.go:364] multinode-711000 host status = "Stopped" (err=<nil>)
	I0930 04:01:12.959414    4461 status.go:377] host is not running, skipping remaining checks
	I0930 04:01:12.959416    4461 status.go:176] multinode-711000 status: &{Name:multinode-711000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:424: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-711000 status --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-711000 -n multinode-711000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-711000 -n multinode-711000: exit status 7 (31.306542ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-711000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeleteNode (0.10s)
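Analyst note: the post-mortem helper extracts a single status field with a Go template (`status --format={{.Host}}`) and treats exit status 7 as possibly benign ("may be ok"). A minimal sketch of the same query follows; the binary path and profile name are assumptions copied from this log, not a fixed API.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hostState asks minikube for just the Host field via a Go template,
// mirroring the helpers_test.go post-mortem call above. On exit status 7
// the stdout ("Stopped") is still captured alongside the error.
func hostState(profile string) (string, error) {
	out, err := exec.Command("out/minikube-darwin-arm64",
		"status", "--format={{.Host}}", "-p", profile).Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	state, err := hostState("multinode-711000")
	if state != "Running" {
		fmt.Printf("host is %q (err=%v); skipping log retrieval\n", state, err)
	}
}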

                                                
                                    
TestMultiNode/serial/StopMultiNode (2.15s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-711000 stop
multinode_test.go:345: (dbg) Done: out/minikube-darwin-arm64 -p multinode-711000 stop: (2.022532292s)
multinode_test.go:351: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-711000 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-711000 status: exit status 7 (65.342209ms)

                                                
                                                
-- stdout --
	multinode-711000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-711000 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-711000 status --alsologtostderr: exit status 7 (33.138875ms)

                                                
                                                
-- stdout --
	multinode-711000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0930 04:01:15.111426    4477 out.go:345] Setting OutFile to fd 1 ...
	I0930 04:01:15.111568    4477 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 04:01:15.111571    4477 out.go:358] Setting ErrFile to fd 2...
	I0930 04:01:15.111573    4477 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 04:01:15.111721    4477 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19734-1406/.minikube/bin
	I0930 04:01:15.111852    4477 out.go:352] Setting JSON to false
	I0930 04:01:15.111863    4477 mustload.go:65] Loading cluster: multinode-711000
	I0930 04:01:15.111929    4477 notify.go:220] Checking for updates...
	I0930 04:01:15.112102    4477 config.go:182] Loaded profile config "multinode-711000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0930 04:01:15.112115    4477 status.go:174] checking status of multinode-711000 ...
	I0930 04:01:15.112362    4477 status.go:364] multinode-711000 host status = "Stopped" (err=<nil>)
	I0930 04:01:15.112366    4477 status.go:377] host is not running, skipping remaining checks
	I0930 04:01:15.112368    4477 status.go:176] multinode-711000 status: &{Name:multinode-711000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:364: incorrect number of stopped hosts: args "out/minikube-darwin-arm64 -p multinode-711000 status --alsologtostderr": multinode-711000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
multinode_test.go:368: incorrect number of stopped kubelets: args "out/minikube-darwin-arm64 -p multinode-711000 status --alsologtostderr": multinode-711000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-711000 -n multinode-711000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-711000 -n multinode-711000: exit status 7 (30.929083ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-711000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopMultiNode (2.15s)
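Analyst note: this failure is a count mismatch, not a stop failure — `minikube stop` succeeded, but the status output lists only one node reporting "host: Stopped" where the test expects two, because the second node was never created earlier in the run. Below is a sketch of that counting check, assuming (as the "incorrect number of stopped hosts/kubelets" messages suggest) the test counts "host: Stopped" occurrences against the number of nodes it started.

package main

import (
	"fmt"
	"strings"
)

// countStopped counts how many nodes in a `minikube status` dump report a
// stopped host. For the run above this yields 1, not the expected 2.
func countStopped(statusOut string) int {
	return strings.Count(statusOut, "host: Stopped")
}

func main() {
	status := "multinode-711000\ntype: Control Plane\nhost: Stopped\nkubelet: Stopped\napiserver: Stopped\nkubeconfig: Stopped\n"
	fmt.Println(countStopped(status)) // 1
}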

                                                
                                    
TestMultiNode/serial/RestartMultiNode (5.26s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-711000 --wait=true -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:376: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-711000 --wait=true -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (5.189398583s)

                                                
                                                
-- stdout --
	* [multinode-711000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19734
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19734-1406/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19734-1406/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-711000" primary control-plane node in "multinode-711000" cluster
	* Restarting existing qemu2 VM for "multinode-711000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-711000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0930 04:01:15.172600    4481 out.go:345] Setting OutFile to fd 1 ...
	I0930 04:01:15.172725    4481 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 04:01:15.172728    4481 out.go:358] Setting ErrFile to fd 2...
	I0930 04:01:15.172731    4481 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 04:01:15.172863    4481 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19734-1406/.minikube/bin
	I0930 04:01:15.173949    4481 out.go:352] Setting JSON to false
	I0930 04:01:15.190404    4481 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3638,"bootTime":1727690437,"procs":508,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0930 04:01:15.190471    4481 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0930 04:01:15.195842    4481 out.go:177] * [multinode-711000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0930 04:01:15.207856    4481 out.go:177]   - MINIKUBE_LOCATION=19734
	I0930 04:01:15.207892    4481 notify.go:220] Checking for updates...
	I0930 04:01:15.214800    4481 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19734-1406/kubeconfig
	I0930 04:01:15.217802    4481 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0930 04:01:15.220738    4481 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0930 04:01:15.223796    4481 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19734-1406/.minikube
	I0930 04:01:15.226792    4481 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0930 04:01:15.230019    4481 config.go:182] Loaded profile config "multinode-711000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0930 04:01:15.230287    4481 driver.go:394] Setting default libvirt URI to qemu:///system
	I0930 04:01:15.234805    4481 out.go:177] * Using the qemu2 driver based on existing profile
	I0930 04:01:15.240672    4481 start.go:297] selected driver: qemu2
	I0930 04:01:15.240677    4481 start.go:901] validating driver "qemu2" against &{Name:multinode-711000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.31.1 ClusterName:multinode-711000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] M
ountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 04:01:15.240715    4481 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0930 04:01:15.242964    4481 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0930 04:01:15.242989    4481 cni.go:84] Creating CNI manager for ""
	I0930 04:01:15.243013    4481 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0930 04:01:15.243068    4481 start.go:340] cluster config:
	{Name:multinode-711000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-711000 Namespace:default APIServerH
AVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fals
e DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 04:01:15.246649    4481 iso.go:125] acquiring lock: {Name:mk0aae8f7f133a44b195b0eefe4c7ae573d14aac Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0930 04:01:15.254837    4481 out.go:177] * Starting "multinode-711000" primary control-plane node in "multinode-711000" cluster
	I0930 04:01:15.258808    4481 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0930 04:01:15.258831    4481 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0930 04:01:15.258840    4481 cache.go:56] Caching tarball of preloaded images
	I0930 04:01:15.258909    4481 preload.go:172] Found /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0930 04:01:15.258915    4481 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0930 04:01:15.258968    4481 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19734-1406/.minikube/profiles/multinode-711000/config.json ...
	I0930 04:01:15.259408    4481 start.go:360] acquireMachinesLock for multinode-711000: {Name:mk6ecfc7d8126da9fa2597e515ffeee9de676ee7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0930 04:01:15.259437    4481 start.go:364] duration metric: took 23.209µs to acquireMachinesLock for "multinode-711000"
	I0930 04:01:15.259445    4481 start.go:96] Skipping create...Using existing machine configuration
	I0930 04:01:15.259450    4481 fix.go:54] fixHost starting: 
	I0930 04:01:15.259564    4481 fix.go:112] recreateIfNeeded on multinode-711000: state=Stopped err=<nil>
	W0930 04:01:15.259572    4481 fix.go:138] unexpected machine state, will restart: <nil>
	I0930 04:01:15.263859    4481 out.go:177] * Restarting existing qemu2 VM for "multinode-711000" ...
	I0930 04:01:15.271721    4481 qemu.go:418] Using hvf for hardware acceleration
	I0930 04:01:15.271755    4481 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/multinode-711000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19734-1406/.minikube/machines/multinode-711000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/multinode-711000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b6:4e:5b:cd:13:8a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/multinode-711000/disk.qcow2
	I0930 04:01:15.273771    4481 main.go:141] libmachine: STDOUT: 
	I0930 04:01:15.273792    4481 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0930 04:01:15.273823    4481 fix.go:56] duration metric: took 14.371042ms for fixHost
	I0930 04:01:15.273829    4481 start.go:83] releasing machines lock for "multinode-711000", held for 14.387083ms
	W0930 04:01:15.273835    4481 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0930 04:01:15.273884    4481 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0930 04:01:15.273889    4481 start.go:729] Will try again in 5 seconds ...
	I0930 04:01:20.275975    4481 start.go:360] acquireMachinesLock for multinode-711000: {Name:mk6ecfc7d8126da9fa2597e515ffeee9de676ee7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0930 04:01:20.276332    4481 start.go:364] duration metric: took 299.625µs to acquireMachinesLock for "multinode-711000"
	I0930 04:01:20.276460    4481 start.go:96] Skipping create...Using existing machine configuration
	I0930 04:01:20.276479    4481 fix.go:54] fixHost starting: 
	I0930 04:01:20.277157    4481 fix.go:112] recreateIfNeeded on multinode-711000: state=Stopped err=<nil>
	W0930 04:01:20.277182    4481 fix.go:138] unexpected machine state, will restart: <nil>
	I0930 04:01:20.281559    4481 out.go:177] * Restarting existing qemu2 VM for "multinode-711000" ...
	I0930 04:01:20.289531    4481 qemu.go:418] Using hvf for hardware acceleration
	I0930 04:01:20.289695    4481 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/multinode-711000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19734-1406/.minikube/machines/multinode-711000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/multinode-711000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b6:4e:5b:cd:13:8a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/multinode-711000/disk.qcow2
	I0930 04:01:20.298772    4481 main.go:141] libmachine: STDOUT: 
	I0930 04:01:20.298826    4481 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0930 04:01:20.298931    4481 fix.go:56] duration metric: took 22.417833ms for fixHost
	I0930 04:01:20.298951    4481 start.go:83] releasing machines lock for "multinode-711000", held for 22.595667ms
	W0930 04:01:20.299102    4481 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p multinode-711000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-711000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0930 04:01:20.306515    4481 out.go:201] 
	W0930 04:01:20.310525    4481 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0930 04:01:20.310552    4481 out.go:270] * 
	* 
	W0930 04:01:20.313591    4481 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0930 04:01:20.320563    4481 out.go:201] 

** /stderr **
multinode_test.go:378: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-711000 --wait=true -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-711000 -n multinode-711000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-711000 -n multinode-711000: exit status 7 (69.139875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-711000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartMultiNode (5.26s)
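Note: every qemu2 start failure in this run has the same root cause: nothing is listening on the unix socket /var/run/socket_vmnet, so socket_vmnet_client (and therefore QEMU) cannot attach to the vmnet network. The following minimal Go probe, a sketch that is not part of the test suite and that assumes only the socket path /var/run/socket_vmnet seen in the logs, reproduces the failing step in isolation:

	// socketprobe.go: hypothetical standalone check, not part of minikube.
	// It performs the same unix-domain dial that socket_vmnet_client needs
	// to succeed before it can hand a vmnet file descriptor to QEMU.
	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		// Socket path taken verbatim from the logs above.
		const socketPath = "/var/run/socket_vmnet"

		conn, err := net.DialTimeout("unix", socketPath, 2*time.Second)
		if err != nil {
			// With no socket_vmnet daemon listening, this prints
			// "connect: connection refused", matching the test logs.
			fmt.Fprintf(os.Stderr, "socket_vmnet probe failed: %v\n", err)
			os.Exit(1)
		}
		defer conn.Close()
		fmt.Println("socket_vmnet is listening at", socketPath)
	}

On a healthy agent the probe prints the success line; on this agent it would fail with "connect: connection refused", which points at the socket_vmnet service on the Jenkins host rather than at minikube or QEMU.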

TestMultiNode/serial/ValidateNameConflict (20.05s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-711000
multinode_test.go:464: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-711000-m01 --driver=qemu2 
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-711000-m01 --driver=qemu2 : exit status 80 (9.903390708s)

-- stdout --
	* [multinode-711000-m01] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19734
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19734-1406/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19734-1406/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-711000-m01" primary control-plane node in "multinode-711000-m01" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-711000-m01" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-711000-m01" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-711000-m02 --driver=qemu2 
multinode_test.go:472: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-711000-m02 --driver=qemu2 : exit status 80 (9.910587125s)

-- stdout --
	* [multinode-711000-m02] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19734
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19734-1406/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19734-1406/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-711000-m02" primary control-plane node in "multinode-711000-m02" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-711000-m02" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-711000-m02" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:474: failed to start profile. args "out/minikube-darwin-arm64 start -p multinode-711000-m02 --driver=qemu2 " : exit status 80
multinode_test.go:479: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-711000
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-711000: exit status 83 (79.991625ms)

-- stdout --
	* The control-plane node multinode-711000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-711000"

-- /stdout --
multinode_test.go:484: (dbg) Run:  out/minikube-darwin-arm64 delete -p multinode-711000-m02
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-711000 -n multinode-711000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-711000 -n multinode-711000: exit status 7 (31.27925ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-711000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ValidateNameConflict (20.05s)

TestPreload (10.06s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p test-preload-065000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p test-preload-065000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4: exit status 80 (9.906776042s)

-- stdout --
	* [test-preload-065000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19734
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19734-1406/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19734-1406/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "test-preload-065000" primary control-plane node in "test-preload-065000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "test-preload-065000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0930 04:01:40.600318    4539 out.go:345] Setting OutFile to fd 1 ...
	I0930 04:01:40.600448    4539 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 04:01:40.600451    4539 out.go:358] Setting ErrFile to fd 2...
	I0930 04:01:40.600454    4539 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 04:01:40.600581    4539 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19734-1406/.minikube/bin
	I0930 04:01:40.601641    4539 out.go:352] Setting JSON to false
	I0930 04:01:40.617721    4539 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3663,"bootTime":1727690437,"procs":507,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0930 04:01:40.617799    4539 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0930 04:01:40.625127    4539 out.go:177] * [test-preload-065000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0930 04:01:40.632843    4539 out.go:177]   - MINIKUBE_LOCATION=19734
	I0930 04:01:40.632878    4539 notify.go:220] Checking for updates...
	I0930 04:01:40.639975    4539 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19734-1406/kubeconfig
	I0930 04:01:40.641582    4539 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0930 04:01:40.644917    4539 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0930 04:01:40.647952    4539 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19734-1406/.minikube
	I0930 04:01:40.650946    4539 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0930 04:01:40.654308    4539 config.go:182] Loaded profile config "multinode-711000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0930 04:01:40.654368    4539 driver.go:394] Setting default libvirt URI to qemu:///system
	I0930 04:01:40.658962    4539 out.go:177] * Using the qemu2 driver based on user configuration
	I0930 04:01:40.665938    4539 start.go:297] selected driver: qemu2
	I0930 04:01:40.665946    4539 start.go:901] validating driver "qemu2" against <nil>
	I0930 04:01:40.665956    4539 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0930 04:01:40.668169    4539 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0930 04:01:40.671980    4539 out.go:177] * Automatically selected the socket_vmnet network
	I0930 04:01:40.675028    4539 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0930 04:01:40.675056    4539 cni.go:84] Creating CNI manager for ""
	I0930 04:01:40.675086    4539 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0930 04:01:40.675090    4539 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0930 04:01:40.675118    4539 start.go:340] cluster config:
	{Name:test-preload-065000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-065000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 04:01:40.678737    4539 iso.go:125] acquiring lock: {Name:mk0aae8f7f133a44b195b0eefe4c7ae573d14aac Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0930 04:01:40.686945    4539 out.go:177] * Starting "test-preload-065000" primary control-plane node in "test-preload-065000" cluster
	I0930 04:01:40.689872    4539 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime docker
	I0930 04:01:40.689959    4539 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19734-1406/.minikube/profiles/test-preload-065000/config.json ...
	I0930 04:01:40.689983    4539 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19734-1406/.minikube/profiles/test-preload-065000/config.json: {Name:mk532ef4a013bad4d1eb9df9a95de253cb82090d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 04:01:40.690012    4539 cache.go:107] acquiring lock: {Name:mk40bb24f276da084af3362fead279a169db3542 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0930 04:01:40.690025    4539 cache.go:107] acquiring lock: {Name:mk905fde41c958a0fae53521c8c74b46b0edc8b9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0930 04:01:40.690050    4539 cache.go:107] acquiring lock: {Name:mk7acb8c73bad9c9c7e498e6d74248e23aa7835b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0930 04:01:40.690215    4539 cache.go:107] acquiring lock: {Name:mkcb453b2f845d52974e962c17990f1ff70366bf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0930 04:01:40.690305    4539 start.go:360] acquireMachinesLock for test-preload-065000: {Name:mk6ecfc7d8126da9fa2597e515ffeee9de676ee7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0930 04:01:40.690301    4539 cache.go:107] acquiring lock: {Name:mk7202dbb2d3d1aed15fc2a83ebb135d1c3152e6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0930 04:01:40.690317    4539 cache.go:107] acquiring lock: {Name:mka6ae5a68f2ba3e4ec2b5d47f21dc3267a7eb3a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0930 04:01:40.690299    4539 cache.go:107] acquiring lock: {Name:mk044b0dd2254448f360e449895392a5e15aefee Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0930 04:01:40.690347    4539 start.go:364] duration metric: took 32.25µs to acquireMachinesLock for "test-preload-065000"
	I0930 04:01:40.690329    4539 cache.go:107] acquiring lock: {Name:mkca401910b9935617904fa6fcaecd9c56035b55 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0930 04:01:40.690428    4539 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0930 04:01:40.690443    4539 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0930 04:01:40.690463    4539 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0930 04:01:40.690471    4539 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0930 04:01:40.690403    4539 start.go:93] Provisioning new machine with config: &{Name:test-preload-065000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-065000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0930 04:01:40.690483    4539 start.go:125] createHost starting for "" (driver="qemu2")
	I0930 04:01:40.690516    4539 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0930 04:01:40.690558    4539 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0930 04:01:40.690628    4539 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0930 04:01:40.690587    4539 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0930 04:01:40.698908    4539 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0930 04:01:40.701657    4539 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0930 04:01:40.701717    4539 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0930 04:01:40.704706    4539 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0930 04:01:40.704778    4539 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0930 04:01:40.704797    4539 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0930 04:01:40.704799    4539 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0930 04:01:40.704799    4539 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0930 04:01:40.704895    4539 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0930 04:01:40.717540    4539 start.go:159] libmachine.API.Create for "test-preload-065000" (driver="qemu2")
	I0930 04:01:40.717565    4539 client.go:168] LocalClient.Create starting
	I0930 04:01:40.717640    4539 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19734-1406/.minikube/certs/ca.pem
	I0930 04:01:40.717671    4539 main.go:141] libmachine: Decoding PEM data...
	I0930 04:01:40.717681    4539 main.go:141] libmachine: Parsing certificate...
	I0930 04:01:40.717722    4539 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19734-1406/.minikube/certs/cert.pem
	I0930 04:01:40.717748    4539 main.go:141] libmachine: Decoding PEM data...
	I0930 04:01:40.717759    4539 main.go:141] libmachine: Parsing certificate...
	I0930 04:01:40.718142    4539 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19734-1406/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0930 04:01:40.880296    4539 main.go:141] libmachine: Creating SSH key...
	I0930 04:01:40.932976    4539 main.go:141] libmachine: Creating Disk image...
	I0930 04:01:40.933009    4539 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0930 04:01:40.933221    4539 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/test-preload-065000/disk.qcow2.raw /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/test-preload-065000/disk.qcow2
	I0930 04:01:40.943017    4539 main.go:141] libmachine: STDOUT: 
	I0930 04:01:40.943048    4539 main.go:141] libmachine: STDERR: 
	I0930 04:01:40.943113    4539 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/test-preload-065000/disk.qcow2 +20000M
	I0930 04:01:40.952241    4539 main.go:141] libmachine: STDOUT: Image resized.
	
	I0930 04:01:40.952262    4539 main.go:141] libmachine: STDERR: 
	I0930 04:01:40.952288    4539 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/test-preload-065000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/test-preload-065000/disk.qcow2
	I0930 04:01:40.952292    4539 main.go:141] libmachine: Starting QEMU VM...
	I0930 04:01:40.952306    4539 qemu.go:418] Using hvf for hardware acceleration
	I0930 04:01:40.952342    4539 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/test-preload-065000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19734-1406/.minikube/machines/test-preload-065000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/test-preload-065000/qemu.pid -device virtio-net-pci,netdev=net0,mac=22:8f:55:5e:2b:e7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/test-preload-065000/disk.qcow2
	I0930 04:01:40.954386    4539 main.go:141] libmachine: STDOUT: 
	I0930 04:01:40.954418    4539 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0930 04:01:40.954440    4539 client.go:171] duration metric: took 236.872083ms to LocalClient.Create
	I0930 04:01:42.664662    4539 cache.go:162] opening:  /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4
	W0930 04:01:42.799990    4539 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0930 04:01:42.800137    4539 cache.go:162] opening:  /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0930 04:01:42.860133    4539 cache.go:162] opening:  /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0930 04:01:42.904712    4539 cache.go:162] opening:  /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0930 04:01:42.956043    4539 start.go:128] duration metric: took 2.265561083s to createHost
	I0930 04:01:42.956081    4539 start.go:83] releasing machines lock for "test-preload-065000", held for 2.265752417s
	W0930 04:01:42.956152    4539 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0930 04:01:42.977012    4539 out.go:177] * Deleting "test-preload-065000" in qemu2 ...
	I0930 04:01:42.998350    4539 cache.go:157] /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 exists
	I0930 04:01:42.998403    4539 cache.go:96] cache image "registry.k8s.io/pause:3.7" -> "/Users/jenkins/minikube-integration/19734-1406/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7" took 2.308245s
	I0930 04:01:42.998444    4539 cache.go:80] save to tar file registry.k8s.io/pause:3.7 -> /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 succeeded
	W0930 04:01:43.017024    4539 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0930 04:01:43.017049    4539 start.go:729] Will try again in 5 seconds ...
	W0930 04:01:43.148008    4539 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0930 04:01:43.148090    4539 cache.go:162] opening:  /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0930 04:01:43.305656    4539 cache.go:162] opening:  /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0930 04:01:43.347687    4539 cache.go:162] opening:  /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4
	I0930 04:01:43.352714    4539 cache.go:162] opening:  /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4
	I0930 04:01:44.447792    4539 cache.go:157] /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 exists
	I0930 04:01:44.447842    4539 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.8.6" -> "/Users/jenkins/minikube-integration/19734-1406/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6" took 3.757574167s
	I0930 04:01:44.447904    4539 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.8.6 -> /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 succeeded
	I0930 04:01:45.179058    4539 cache.go:157] /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0930 04:01:45.179106    4539 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19734-1406/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 4.489159625s
	I0930 04:01:45.179135    4539 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0930 04:01:45.846296    4539 cache.go:157] /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 exists
	I0930 04:01:45.846365    4539 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.24.4" -> "/Users/jenkins/minikube-integration/19734-1406/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4" took 5.156425333s
	I0930 04:01:45.846407    4539 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.24.4 -> /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 succeeded
	I0930 04:01:45.868243    4539 cache.go:157] /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 exists
	I0930 04:01:45.868282    4539 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.24.4" -> "/Users/jenkins/minikube-integration/19734-1406/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4" took 5.178122083s
	I0930 04:01:45.868305    4539 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.24.4 -> /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 succeeded
	I0930 04:01:46.864422    4539 cache.go:157] /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 exists
	I0930 04:01:46.864471    4539 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.24.4" -> "/Users/jenkins/minikube-integration/19734-1406/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4" took 6.174539875s
	I0930 04:01:46.864497    4539 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.24.4 -> /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 succeeded
	I0930 04:01:48.017179    4539 start.go:360] acquireMachinesLock for test-preload-065000: {Name:mk6ecfc7d8126da9fa2597e515ffeee9de676ee7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0930 04:01:48.017613    4539 start.go:364] duration metric: took 367.25µs to acquireMachinesLock for "test-preload-065000"
	I0930 04:01:48.017722    4539 start.go:93] Provisioning new machine with config: &{Name:test-preload-065000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-065000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0930 04:01:48.017993    4539 start.go:125] createHost starting for "" (driver="qemu2")
	I0930 04:01:48.025574    4539 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0930 04:01:48.077673    4539 start.go:159] libmachine.API.Create for "test-preload-065000" (driver="qemu2")
	I0930 04:01:48.077739    4539 client.go:168] LocalClient.Create starting
	I0930 04:01:48.077863    4539 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19734-1406/.minikube/certs/ca.pem
	I0930 04:01:48.077927    4539 main.go:141] libmachine: Decoding PEM data...
	I0930 04:01:48.077946    4539 main.go:141] libmachine: Parsing certificate...
	I0930 04:01:48.077993    4539 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19734-1406/.minikube/certs/cert.pem
	I0930 04:01:48.078037    4539 main.go:141] libmachine: Decoding PEM data...
	I0930 04:01:48.078053    4539 main.go:141] libmachine: Parsing certificate...
	I0930 04:01:48.078555    4539 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19734-1406/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0930 04:01:48.266480    4539 main.go:141] libmachine: Creating SSH key...
	I0930 04:01:48.287423    4539 cache.go:157] /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 exists
	I0930 04:01:48.287443    4539 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.24.4" -> "/Users/jenkins/minikube-integration/19734-1406/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4" took 7.5973395s
	I0930 04:01:48.287449    4539 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.24.4 -> /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 succeeded
	I0930 04:01:48.404375    4539 main.go:141] libmachine: Creating Disk image...
	I0930 04:01:48.404383    4539 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0930 04:01:48.404558    4539 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/test-preload-065000/disk.qcow2.raw /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/test-preload-065000/disk.qcow2
	I0930 04:01:48.414056    4539 main.go:141] libmachine: STDOUT: 
	I0930 04:01:48.414074    4539 main.go:141] libmachine: STDERR: 
	I0930 04:01:48.414129    4539 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/test-preload-065000/disk.qcow2 +20000M
	I0930 04:01:48.422022    4539 main.go:141] libmachine: STDOUT: Image resized.
	
	I0930 04:01:48.422037    4539 main.go:141] libmachine: STDERR: 
	I0930 04:01:48.422046    4539 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/test-preload-065000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/test-preload-065000/disk.qcow2
	I0930 04:01:48.422052    4539 main.go:141] libmachine: Starting QEMU VM...
	I0930 04:01:48.422061    4539 qemu.go:418] Using hvf for hardware acceleration
	I0930 04:01:48.422106    4539 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/test-preload-065000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19734-1406/.minikube/machines/test-preload-065000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/test-preload-065000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c2:b4:b7:e2:66:dd -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/test-preload-065000/disk.qcow2
	I0930 04:01:48.423860    4539 main.go:141] libmachine: STDOUT: 
	I0930 04:01:48.423873    4539 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0930 04:01:48.423885    4539 client.go:171] duration metric: took 346.144875ms to LocalClient.Create
	I0930 04:01:50.424356    4539 start.go:128] duration metric: took 2.40634325s to createHost
	I0930 04:01:50.424401    4539 start.go:83] releasing machines lock for "test-preload-065000", held for 2.406797792s
	W0930 04:01:50.424716    4539 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p test-preload-065000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p test-preload-065000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0930 04:01:50.442521    4539 out.go:201] 
	W0930 04:01:50.447447    4539 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0930 04:01:50.447477    4539 out.go:270] * 
	* 
	W0930 04:01:50.450161    4539 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0930 04:01:50.462408    4539 out.go:201] 

** /stderr **
preload_test.go:46: out/minikube-darwin-arm64 start -p test-preload-065000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4 failed: exit status 80
panic.go:629: *** TestPreload FAILED at 2024-09-30 04:01:50.480932 -0700 PDT m=+2506.193610376
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-065000 -n test-preload-065000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-065000 -n test-preload-065000: exit status 7 (67.243542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "test-preload-065000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "test-preload-065000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p test-preload-065000
--- FAIL: TestPreload (10.06s)
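Note: the cache.go lines above show that image caching proceeds independently of the VM: each image is guarded by its own lock, and a tarball already present under .minikube/cache/images is reused instead of re-downloaded, which is why the per-image "save to tar file ... succeeded" messages appear even after createHost has failed. The sketch below illustrates that discipline; the helper fetchImage and the path layout are illustrative assumptions, not minikube's actual API:

	// cachecheck.go: hypothetical sketch of per-image, lock-guarded caching.
	package main

	import (
		"fmt"
		"os"
		"path/filepath"
		"sync"
	)

	// cacheDir mirrors the layout seen in the log; HOME stands in for the
	// Jenkins workspace path.
	var cacheDir = filepath.Join(os.Getenv("HOME"), ".minikube", "cache", "images", "arm64")

	// fetchImage takes the image's own lock, then skips the download when
	// the cached tarball already exists on disk.
	func fetchImage(name string, mu *sync.Mutex) {
		mu.Lock()
		defer mu.Unlock()
		tar := filepath.Join(cacheDir, filepath.FromSlash(name))
		if _, err := os.Stat(tar); err == nil {
			fmt.Printf("cache image %q exists, skipping download\n", name)
			return
		}
		fmt.Printf("cache image %q missing, would download to %s\n", name, tar)
	}

	func main() {
		images := []string{"registry.k8s.io/pause_3.7", "registry.k8s.io/coredns/coredns_v1.8.6"}
		locks := make([]sync.Mutex, len(images))
		var wg sync.WaitGroup
		for i, img := range images {
			wg.Add(1)
			go func(i int, img string) {
				defer wg.Done()
				fetchImage(img, &locks[i])
			}(i, img)
		}
		wg.Wait()
	}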

TestScheduledStopUnix (10.09s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 start -p scheduled-stop-222000 --memory=2048 --driver=qemu2 
E0930 04:01:55.495438    1929 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19734-1406/.minikube/profiles/functional-853000/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p scheduled-stop-222000 --memory=2048 --driver=qemu2 : exit status 80 (9.937773833s)

-- stdout --
	* [scheduled-stop-222000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19734
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19734-1406/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19734-1406/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-222000" primary control-plane node in "scheduled-stop-222000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-222000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-222000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
scheduled_stop_test.go:130: starting minikube: exit status 80

-- stdout --
	* [scheduled-stop-222000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19734
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19734-1406/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19734-1406/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-222000" primary control-plane node in "scheduled-stop-222000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-222000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-222000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:629: *** TestScheduledStopUnix FAILED at 2024-09-30 04:02:00.571615 -0700 PDT m=+2516.284438626
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-222000 -n scheduled-stop-222000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-222000 -n scheduled-stop-222000: exit status 7 (66.801209ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "scheduled-stop-222000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "scheduled-stop-222000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p scheduled-stop-222000
--- FAIL: TestScheduledStopUnix (10.09s)
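
Editor's note: every qemu2 start in this report fails the same way: nothing is listening on /var/run/socket_vmnet on the CI host, so the driver's first connection attempt is refused before a VM can even boot. The probe below is a minimal sketch (not part of the test suite; the socket path is copied from the failure output above) that checks this precondition directly:

	// socket_vmnet probe: illustrative sketch only, not minikube code.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Dial the unix socket the qemu2 driver connects to.
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			// On this CI host this prints a "connection refused" error,
			// matching the ERROR lines quoted in the report.
			fmt.Println("socket_vmnet unreachable:", err)
			return
		}
		defer conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

On a healthy host a running socket_vmnet daemon owns that socket; its absence here explains the GUEST_PROVISION exits repeated throughout this run.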

TestSkaffold (16.18s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/skaffold.exe1052793568 version
skaffold_test.go:59: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/skaffold.exe1052793568 version: (1.063135375s)
skaffold_test.go:63: skaffold version: v2.13.2
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-arm64 start -p skaffold-461000 --memory=2600 --driver=qemu2 
skaffold_test.go:66: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p skaffold-461000 --memory=2600 --driver=qemu2 : exit status 80 (9.876658792s)

-- stdout --
	* [skaffold-461000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19734
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19734-1406/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19734-1406/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-461000" primary control-plane node in "skaffold-461000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-461000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-461000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
skaffold_test.go:68: starting minikube: exit status 80

-- stdout --
	* [skaffold-461000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19734
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19734-1406/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19734-1406/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-461000" primary control-plane node in "skaffold-461000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-461000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-461000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:629: *** TestSkaffold FAILED at 2024-09-30 04:02:16.754225 -0700 PDT m=+2532.467279251
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-461000 -n skaffold-461000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-461000 -n skaffold-461000: exit status 7 (64.34925ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "skaffold-461000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "skaffold-461000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p skaffold-461000
--- FAIL: TestSkaffold (16.18s)
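
Editor's note: the stdout above also shows the driver's recovery path: StartHost fails, the half-created VM is deleted, exactly one more create is attempted, and only then does minikube exit with GUEST_PROVISION (exit status 80). A reduced illustration of that control flow, with hypothetical function names rather than the actual minikube source:

	// Retry-then-fail flow as observed in the logs: illustrative only.
	package main

	import (
		"errors"
		"fmt"
	)

	// createVM stands in for the qemu2 host-create step; on this CI host
	// it always fails because /var/run/socket_vmnet refuses connections.
	func createVM(profile string) error {
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}

	func startHost(profile string) error {
		err := createVM(profile)
		if err == nil {
			return nil
		}
		fmt.Println("! StartHost failed, but will try again:", err)
		fmt.Printf("* Deleting %q ...\n", profile) // discard the half-created VM
		if err := createVM(profile); err != nil { // single retry, as in the logs
			return fmt.Errorf("GUEST_PROVISION: %w", err)
		}
		return nil
	}

	func main() {
		if err := startHost("skaffold-461000"); err != nil {
			fmt.Println("X Exiting due to", err) // surfaces as exit status 80
		}
	}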

TestRunningBinaryUpgrade (622.66s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.4280074467 start -p running-upgrade-520000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:120: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.4280074467 start -p running-upgrade-520000 --memory=2200 --vm-driver=qemu2 : (1m19.799343416s)
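
Editor's note: this test exercises an in-place binary upgrade against a single profile: the released v1.26.0 binary (downloaded to the temp path shown above) creates the cluster, then the freshly built out/minikube-darwin-arm64 re-starts the same running profile. A stripped-down sketch of that sequence (hypothetical paths; not the actual version_upgrade_test.go):

	// Two-binary upgrade pattern: illustrative sketch only.
	package main

	import (
		"log"
		"os/exec"
	)

	func run(bin string, args ...string) {
		if out, err := exec.Command(bin, args...).CombinedOutput(); err != nil {
			log.Fatalf("%s %v: %v\n%s", bin, args, err, out)
		}
	}

	func main() {
		const profile = "running-upgrade-520000"
		oldBin := "/tmp/minikube-v1.26.0"     // placeholder; the real test uses a downloaded temp file
		newBin := "out/minikube-darwin-arm64" // freshly built binary under test

		// 1. Create the cluster with the released binary.
		run(oldBin, "start", "-p", profile, "--memory=2200", "--vm-driver=qemu2")
		// 2. Upgrade in place: re-start the same running profile with the new binary.
		run(newBin, "start", "-p", profile, "--memory=2200", "--alsologtostderr", "-v=1", "--driver=qemu2")
	}
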
version_upgrade_test.go:130: (dbg) Run:  out/minikube-darwin-arm64 start -p running-upgrade-520000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
E0930 04:05:18.253796    1929 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19734-1406/.minikube/profiles/addons-584000/client.crt: no such file or directory" logger="UnhandledError"
E0930 04:05:32.405383    1929 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19734-1406/.minikube/profiles/functional-853000/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:130: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p running-upgrade-520000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m24.586551416s)

-- stdout --
	* [running-upgrade-520000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19734
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19734-1406/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19734-1406/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	* Using the qemu2 driver based on existing profile
	* Starting "running-upgrade-520000" primary control-plane node in "running-upgrade-520000" cluster
	* Updating the running qemu2 "running-upgrade-520000" VM ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

-- /stdout --
** stderr ** 
	I0930 04:04:23.147159    4929 out.go:345] Setting OutFile to fd 1 ...
	I0930 04:04:23.147320    4929 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 04:04:23.147323    4929 out.go:358] Setting ErrFile to fd 2...
	I0930 04:04:23.147326    4929 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 04:04:23.147456    4929 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19734-1406/.minikube/bin
	I0930 04:04:23.148539    4929 out.go:352] Setting JSON to false
	I0930 04:04:23.164961    4929 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3826,"bootTime":1727690437,"procs":507,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0930 04:04:23.165055    4929 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0930 04:04:23.170779    4929 out.go:177] * [running-upgrade-520000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0930 04:04:23.178738    4929 out.go:177]   - MINIKUBE_LOCATION=19734
	I0930 04:04:23.178772    4929 notify.go:220] Checking for updates...
	I0930 04:04:23.186799    4929 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19734-1406/kubeconfig
	I0930 04:04:23.190767    4929 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0930 04:04:23.193730    4929 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0930 04:04:23.196733    4929 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19734-1406/.minikube
	I0930 04:04:23.199753    4929 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0930 04:04:23.203082    4929 config.go:182] Loaded profile config "running-upgrade-520000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0930 04:04:23.206753    4929 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I0930 04:04:23.209733    4929 driver.go:394] Setting default libvirt URI to qemu:///system
	I0930 04:04:23.213794    4929 out.go:177] * Using the qemu2 driver based on existing profile
	I0930 04:04:23.220720    4929 start.go:297] selected driver: qemu2
	I0930 04:04:23.220724    4929 start.go:901] validating driver "qemu2" against &{Name:running-upgrade-520000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50285 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-520000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0930 04:04:23.220770    4929 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0930 04:04:23.222968    4929 cni.go:84] Creating CNI manager for ""
	I0930 04:04:23.223001    4929 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0930 04:04:23.223022    4929 start.go:340] cluster config:
	{Name:running-upgrade-520000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50285 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-520000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0930 04:04:23.223066    4929 iso.go:125] acquiring lock: {Name:mk0aae8f7f133a44b195b0eefe4c7ae573d14aac Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0930 04:04:23.230732    4929 out.go:177] * Starting "running-upgrade-520000" primary control-plane node in "running-upgrade-520000" cluster
	I0930 04:04:23.234771    4929 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0930 04:04:23.234789    4929 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0930 04:04:23.234795    4929 cache.go:56] Caching tarball of preloaded images
	I0930 04:04:23.234844    4929 preload.go:172] Found /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0930 04:04:23.234850    4929 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0930 04:04:23.234899    4929 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19734-1406/.minikube/profiles/running-upgrade-520000/config.json ...
	I0930 04:04:23.235395    4929 start.go:360] acquireMachinesLock for running-upgrade-520000: {Name:mk6ecfc7d8126da9fa2597e515ffeee9de676ee7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0930 04:04:23.235422    4929 start.go:364] duration metric: took 21.042µs to acquireMachinesLock for "running-upgrade-520000"
	I0930 04:04:23.235429    4929 start.go:96] Skipping create...Using existing machine configuration
	I0930 04:04:23.235434    4929 fix.go:54] fixHost starting: 
	I0930 04:04:23.236042    4929 fix.go:112] recreateIfNeeded on running-upgrade-520000: state=Running err=<nil>
	W0930 04:04:23.236050    4929 fix.go:138] unexpected machine state, will restart: <nil>
	I0930 04:04:23.243714    4929 out.go:177] * Updating the running qemu2 "running-upgrade-520000" VM ...
	I0930 04:04:23.247737    4929 machine.go:93] provisionDockerMachine start ...
	I0930 04:04:23.247776    4929 main.go:141] libmachine: Using SSH client type: native
	I0930 04:04:23.247879    4929 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100fe9c00] 0x100fec440 <nil>  [] 0s} localhost 50253 <nil> <nil>}
	I0930 04:04:23.247884    4929 main.go:141] libmachine: About to run SSH command:
	hostname
	I0930 04:04:23.312615    4929 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-520000
	
	I0930 04:04:23.312628    4929 buildroot.go:166] provisioning hostname "running-upgrade-520000"
	I0930 04:04:23.312672    4929 main.go:141] libmachine: Using SSH client type: native
	I0930 04:04:23.312773    4929 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100fe9c00] 0x100fec440 <nil>  [] 0s} localhost 50253 <nil> <nil>}
	I0930 04:04:23.312779    4929 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-520000 && echo "running-upgrade-520000" | sudo tee /etc/hostname
	I0930 04:04:23.380876    4929 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-520000
	
	I0930 04:04:23.380930    4929 main.go:141] libmachine: Using SSH client type: native
	I0930 04:04:23.381038    4929 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100fe9c00] 0x100fec440 <nil>  [] 0s} localhost 50253 <nil> <nil>}
	I0930 04:04:23.381046    4929 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-520000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-520000/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-520000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0930 04:04:23.445257    4929 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0930 04:04:23.445273    4929 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19734-1406/.minikube CaCertPath:/Users/jenkins/minikube-integration/19734-1406/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19734-1406/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19734-1406/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19734-1406/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19734-1406/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19734-1406/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19734-1406/.minikube}
	I0930 04:04:23.445281    4929 buildroot.go:174] setting up certificates
	I0930 04:04:23.445286    4929 provision.go:84] configureAuth start
	I0930 04:04:23.445292    4929 provision.go:143] copyHostCerts
	I0930 04:04:23.445350    4929 exec_runner.go:144] found /Users/jenkins/minikube-integration/19734-1406/.minikube/ca.pem, removing ...
	I0930 04:04:23.445356    4929 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19734-1406/.minikube/ca.pem
	I0930 04:04:23.445486    4929 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19734-1406/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19734-1406/.minikube/ca.pem (1078 bytes)
	I0930 04:04:23.445647    4929 exec_runner.go:144] found /Users/jenkins/minikube-integration/19734-1406/.minikube/cert.pem, removing ...
	I0930 04:04:23.445651    4929 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19734-1406/.minikube/cert.pem
	I0930 04:04:23.445692    4929 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19734-1406/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19734-1406/.minikube/cert.pem (1123 bytes)
	I0930 04:04:23.445784    4929 exec_runner.go:144] found /Users/jenkins/minikube-integration/19734-1406/.minikube/key.pem, removing ...
	I0930 04:04:23.445787    4929 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19734-1406/.minikube/key.pem
	I0930 04:04:23.445824    4929 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19734-1406/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19734-1406/.minikube/key.pem (1675 bytes)
	I0930 04:04:23.445904    4929 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19734-1406/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19734-1406/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-520000 san=[127.0.0.1 localhost minikube running-upgrade-520000]
	I0930 04:04:23.539662    4929 provision.go:177] copyRemoteCerts
	I0930 04:04:23.539707    4929 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0930 04:04:23.539714    4929 sshutil.go:53] new ssh client: &{IP:localhost Port:50253 SSHKeyPath:/Users/jenkins/minikube-integration/19734-1406/.minikube/machines/running-upgrade-520000/id_rsa Username:docker}
	I0930 04:04:23.574779    4929 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19734-1406/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0930 04:04:23.581733    4929 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0930 04:04:23.588426    4929 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0930 04:04:23.595622    4929 provision.go:87] duration metric: took 150.33125ms to configureAuth
	I0930 04:04:23.595632    4929 buildroot.go:189] setting minikube options for container-runtime
	I0930 04:04:23.595735    4929 config.go:182] Loaded profile config "running-upgrade-520000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0930 04:04:23.595771    4929 main.go:141] libmachine: Using SSH client type: native
	I0930 04:04:23.595855    4929 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100fe9c00] 0x100fec440 <nil>  [] 0s} localhost 50253 <nil> <nil>}
	I0930 04:04:23.595860    4929 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0930 04:04:23.661478    4929 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0930 04:04:23.661487    4929 buildroot.go:70] root file system type: tmpfs
	I0930 04:04:23.661533    4929 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0930 04:04:23.661594    4929 main.go:141] libmachine: Using SSH client type: native
	I0930 04:04:23.661711    4929 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100fe9c00] 0x100fec440 <nil>  [] 0s} localhost 50253 <nil> <nil>}
	I0930 04:04:23.661744    4929 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0930 04:04:23.732810    4929 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0930 04:04:23.732873    4929 main.go:141] libmachine: Using SSH client type: native
	I0930 04:04:23.732990    4929 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100fe9c00] 0x100fec440 <nil>  [] 0s} localhost 50253 <nil> <nil>}
	I0930 04:04:23.732999    4929 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0930 04:04:23.797495    4929 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0930 04:04:23.797509    4929 machine.go:96] duration metric: took 549.774042ms to provisionDockerMachine
	I0930 04:04:23.797515    4929 start.go:293] postStartSetup for "running-upgrade-520000" (driver="qemu2")
	I0930 04:04:23.797521    4929 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0930 04:04:23.797578    4929 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0930 04:04:23.797586    4929 sshutil.go:53] new ssh client: &{IP:localhost Port:50253 SSHKeyPath:/Users/jenkins/minikube-integration/19734-1406/.minikube/machines/running-upgrade-520000/id_rsa Username:docker}
	I0930 04:04:23.832094    4929 ssh_runner.go:195] Run: cat /etc/os-release
	I0930 04:04:23.833393    4929 info.go:137] Remote host: Buildroot 2021.02.12
	I0930 04:04:23.833402    4929 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19734-1406/.minikube/addons for local assets ...
	I0930 04:04:23.833478    4929 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19734-1406/.minikube/files for local assets ...
	I0930 04:04:23.833591    4929 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19734-1406/.minikube/files/etc/ssl/certs/19292.pem -> 19292.pem in /etc/ssl/certs
	I0930 04:04:23.833687    4929 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0930 04:04:23.837017    4929 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19734-1406/.minikube/files/etc/ssl/certs/19292.pem --> /etc/ssl/certs/19292.pem (1708 bytes)
	I0930 04:04:23.844239    4929 start.go:296] duration metric: took 46.718834ms for postStartSetup
	I0930 04:04:23.844263    4929 fix.go:56] duration metric: took 608.838708ms for fixHost
	I0930 04:04:23.844318    4929 main.go:141] libmachine: Using SSH client type: native
	I0930 04:04:23.844426    4929 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100fe9c00] 0x100fec440 <nil>  [] 0s} localhost 50253 <nil> <nil>}
	I0930 04:04:23.844430    4929 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0930 04:04:23.907166    4929 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727694264.237399432
	
	I0930 04:04:23.907173    4929 fix.go:216] guest clock: 1727694264.237399432
	I0930 04:04:23.907177    4929 fix.go:229] Guest: 2024-09-30 04:04:24.237399432 -0700 PDT Remote: 2024-09-30 04:04:23.844265 -0700 PDT m=+0.716567084 (delta=393.134432ms)
	I0930 04:04:23.907188    4929 fix.go:200] guest clock delta is within tolerance: 393.134432ms
	I0930 04:04:23.907191    4929 start.go:83] releasing machines lock for "running-upgrade-520000", held for 671.774667ms
	I0930 04:04:23.907257    4929 ssh_runner.go:195] Run: cat /version.json
	I0930 04:04:23.907266    4929 sshutil.go:53] new ssh client: &{IP:localhost Port:50253 SSHKeyPath:/Users/jenkins/minikube-integration/19734-1406/.minikube/machines/running-upgrade-520000/id_rsa Username:docker}
	I0930 04:04:23.907258    4929 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0930 04:04:23.907289    4929 sshutil.go:53] new ssh client: &{IP:localhost Port:50253 SSHKeyPath:/Users/jenkins/minikube-integration/19734-1406/.minikube/machines/running-upgrade-520000/id_rsa Username:docker}
	W0930 04:04:23.907828    4929 sshutil.go:64] dial failure (will retry): dial tcp [::1]:50253: connect: connection refused
	I0930 04:04:23.907847    4929 retry.go:31] will retry after 209.724711ms: dial tcp [::1]:50253: connect: connection refused
	W0930 04:04:24.158526    4929 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0930 04:04:24.158703    4929 ssh_runner.go:195] Run: systemctl --version
	I0930 04:04:24.161751    4929 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0930 04:04:24.164462    4929 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0930 04:04:24.164512    4929 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0930 04:04:24.168835    4929 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0930 04:04:24.174842    4929 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0930 04:04:24.174850    4929 start.go:495] detecting cgroup driver to use...
	I0930 04:04:24.174933    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0930 04:04:24.181046    4929 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0930 04:04:24.184670    4929 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0930 04:04:24.188051    4929 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0930 04:04:24.188084    4929 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0930 04:04:24.191271    4929 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0930 04:04:24.194265    4929 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0930 04:04:24.197041    4929 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0930 04:04:24.200085    4929 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0930 04:04:24.203314    4929 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0930 04:04:24.206164    4929 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0930 04:04:24.208905    4929 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0930 04:04:24.212276    4929 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0930 04:04:24.215401    4929 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0930 04:04:24.217908    4929 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 04:04:24.307603    4929 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0930 04:04:24.314034    4929 start.go:495] detecting cgroup driver to use...
	I0930 04:04:24.314116    4929 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0930 04:04:24.322689    4929 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0930 04:04:24.326995    4929 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0930 04:04:24.332853    4929 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0930 04:04:24.337669    4929 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0930 04:04:24.342156    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0930 04:04:24.347719    4929 ssh_runner.go:195] Run: which cri-dockerd
	I0930 04:04:24.348885    4929 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0930 04:04:24.351967    4929 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0930 04:04:24.356856    4929 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0930 04:04:24.449907    4929 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0930 04:04:24.540178    4929 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0930 04:04:24.540238    4929 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0930 04:04:24.545767    4929 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 04:04:24.634853    4929 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0930 04:04:27.151533    4929 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.516700167s)
	I0930 04:04:27.151613    4929 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0930 04:04:27.156408    4929 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0930 04:04:27.163030    4929 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0930 04:04:27.168245    4929 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0930 04:04:27.250090    4929 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0930 04:04:27.330968    4929 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 04:04:27.414023    4929 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0930 04:04:27.420347    4929 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0930 04:04:27.425242    4929 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 04:04:27.505033    4929 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0930 04:04:27.544247    4929 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0930 04:04:27.544353    4929 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0930 04:04:27.546505    4929 start.go:563] Will wait 60s for crictl version
	I0930 04:04:27.546561    4929 ssh_runner.go:195] Run: which crictl
	I0930 04:04:27.548205    4929 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0930 04:04:27.560621    4929 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0930 04:04:27.560698    4929 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0930 04:04:27.573512    4929 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0930 04:04:27.594264    4929 out.go:235] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0930 04:04:27.594414    4929 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0930 04:04:27.595748    4929 kubeadm.go:883] updating cluster {Name:running-upgrade-520000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50285 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-520000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0930 04:04:27.595792    4929 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0930 04:04:27.595841    4929 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0930 04:04:27.606005    4929 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0930 04:04:27.606013    4929 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0930 04:04:27.606065    4929 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0930 04:04:27.609016    4929 ssh_runner.go:195] Run: which lz4
	I0930 04:04:27.610372    4929 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0930 04:04:27.611534    4929 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0930 04:04:27.611543    4929 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0930 04:04:28.601225    4929 docker.go:649] duration metric: took 990.91325ms to copy over tarball
	I0930 04:04:28.601285    4929 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0930 04:04:29.690929    4929 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.089645542s)
	I0930 04:04:29.690943    4929 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0930 04:04:29.706642    4929 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0930 04:04:29.709991    4929 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0930 04:04:29.715256    4929 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 04:04:29.795671    4929 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0930 04:04:31.428762    4929 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.63309675s)
	I0930 04:04:31.428886    4929 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0930 04:04:31.440299    4929 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0930 04:04:31.440308    4929 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0930 04:04:31.440313    4929 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0930 04:04:31.445808    4929 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0930 04:04:31.447517    4929 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0930 04:04:31.449744    4929 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0930 04:04:31.449761    4929 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0930 04:04:31.451340    4929 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0930 04:04:31.451423    4929 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0930 04:04:31.452576    4929 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0930 04:04:31.452598    4929 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0930 04:04:31.453891    4929 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0930 04:04:31.453905    4929 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0930 04:04:31.455156    4929 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0930 04:04:31.455153    4929 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0930 04:04:31.456563    4929 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0930 04:04:31.456636    4929 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0930 04:04:31.457494    4929 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0930 04:04:31.458109    4929 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0930 04:04:33.386291    4929 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0930 04:04:33.423616    4929 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0930 04:04:33.423666    4929 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0930 04:04:33.423796    4929 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0930 04:04:33.444748    4929 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0930 04:04:33.452924    4929 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0930 04:04:33.468130    4929 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0930 04:04:33.468152    4929 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0930 04:04:33.468222    4929 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0930 04:04:33.480522    4929 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0930 04:04:33.480662    4929 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0930 04:04:33.482361    4929 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0930 04:04:33.482374    4929 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0930 04:04:33.489246    4929 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0930 04:04:33.490969    4929 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0930 04:04:33.490980    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0930 04:04:33.500934    4929 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0930 04:04:33.500958    4929 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0930 04:04:33.501029    4929 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0930 04:04:33.527543    4929 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0930 04:04:33.527579    4929 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0930 04:04:33.533404    4929 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0930 04:04:33.542807    4929 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0930 04:04:33.542838    4929 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0930 04:04:33.542910    4929 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0930 04:04:33.552788    4929 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	W0930 04:04:33.855377    4929 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0930 04:04:33.856084    4929 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0930 04:04:33.911499    4929 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0930 04:04:33.911535    4929 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0930 04:04:33.911641    4929 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0930 04:04:33.937300    4929 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0930 04:04:33.937454    4929 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0930 04:04:33.939355    4929 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0930 04:04:33.939370    4929 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0930 04:04:33.970943    4929 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0930 04:04:33.970955    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	W0930 04:04:34.016628    4929 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0930 04:04:34.016760    4929 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0930 04:04:34.040911    4929 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0930 04:04:34.043511    4929 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0930 04:04:34.214563    4929 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0930 04:04:34.214602    4929 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0930 04:04:34.214608    4929 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0930 04:04:34.214613    4929 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0930 04:04:34.214621    4929 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0930 04:04:34.214622    4929 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0930 04:04:34.214621    4929 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0930 04:04:34.214686    4929 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0930 04:04:34.214686    4929 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0930 04:04:34.214713    4929 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0930 04:04:34.233515    4929 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0930 04:04:34.233645    4929 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0930 04:04:34.239192    4929 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0930 04:04:34.239210    4929 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0930 04:04:34.239847    4929 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0930 04:04:34.239862    4929 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0930 04:04:34.278871    4929 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0930 04:04:34.278883    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0930 04:04:34.320867    4929 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0930 04:04:34.320907    4929 cache_images.go:92] duration metric: took 2.880629083s to LoadCachedImages
	W0930 04:04:34.320946    4929 out.go:270] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
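
The cycle above is minikube's image-cache sync: for each required image it runs docker image inspect --format {{.Id}}, compares the reported ID against the expected digest, removes the stale copy with docker rmi, scps the cached tarball into /var/lib/minikube/images, and pipes it through docker load. The run fails overall because kube-apiserver_v1.24.1 is missing from the host-side cache. A minimal Go sketch of that check-and-reload loop (not minikube's actual implementation; the tarball path is hypothetical, and docker inspect prefixes IDs with "sha256:" while the log prints the bare digest):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // imageID returns the local ID for ref, or "" if the image is absent.
    func imageID(ref string) string {
        out, err := exec.Command("docker", "image", "inspect", "--format", "{{.Id}}", ref).Output()
        if err != nil {
            return ""
        }
        return strings.TrimSpace(string(out))
    }

    func main() {
        ref := "registry.k8s.io/kube-proxy:v1.24.1"
        want := "sha256:fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa"
        cached := "/var/lib/minikube/images/kube-proxy_v1.24.1" // hypothetical cached tarball path

        if got := imageID(ref); got != want {
            // Stale or missing: drop the local copy and reload from the cached tarball.
            exec.Command("docker", "rmi", ref).Run()
            load := exec.Command("/bin/bash", "-c", fmt.Sprintf("sudo cat %s | docker load", cached))
            if err := load.Run(); err != nil {
                fmt.Println("load failed:", err)
            }
        }
    }
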
	I0930 04:04:34.320952    4929 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0930 04:04:34.321005    4929 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=running-upgrade-520000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-520000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
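
The [Unit]/[Service] fragment above is the kubelet systemd drop-in that minikube renders from the node config: the binary path comes from the Kubernetes version, and --hostname-override/--node-ip come from the node entry. A sketch of rendering such a drop-in with text/template, using hypothetical field names rather than minikube's real config types:

    package main

    import (
        "os"
        "text/template"
    )

    const dropIn = `[Unit]
    Wants=docker.socket

    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/{{.Version}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint={{.CRISocket}} --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

    [Install]
    `

    func main() {
        t := template.Must(template.New("kubelet").Parse(dropIn))
        // Values taken from the log above: v1.24.1, cri-dockerd, this profile's node.
        t.Execute(os.Stdout, struct {
            Version, CRISocket, NodeName, NodeIP string
        }{"v1.24.1", "unix:///var/run/cri-dockerd.sock", "running-upgrade-520000", "10.0.2.15"})
    }
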
	I0930 04:04:34.321084    4929 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0930 04:04:34.334034    4929 cni.go:84] Creating CNI manager for ""
	I0930 04:04:34.334054    4929 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0930 04:04:34.334067    4929 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0930 04:04:34.334076    4929 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:running-upgrade-520000 NodeName:running-upgrade-520000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0930 04:04:34.334141    4929 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "running-upgrade-520000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
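
The generated kubeadm.yaml above is four YAML documents in one file (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration), separated by --- lines; it is scped to /var/tmp/minikube/kubeadm.yaml.new below. A quick sanity check of the document kinds, as a sketch without a YAML library:

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        raw, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new")
        if err != nil {
            panic(err)
        }
        // Split on the document separator, then print each top-level kind.
        for _, doc := range strings.Split(string(raw), "\n---\n") {
            for _, line := range strings.Split(doc, "\n") {
                if strings.HasPrefix(line, "kind: ") {
                    fmt.Println(strings.TrimPrefix(line, "kind: "))
                }
            }
        }
    }
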
	
	I0930 04:04:34.334208    4929 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0930 04:04:34.337433    4929 binaries.go:44] Found k8s binaries, skipping transfer
	I0930 04:04:34.337463    4929 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0930 04:04:34.340042    4929 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0930 04:04:34.344644    4929 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0930 04:04:34.349175    4929 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0930 04:04:34.354319    4929 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0930 04:04:34.355592    4929 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 04:04:34.439339    4929 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0930 04:04:34.444446    4929 certs.go:68] Setting up /Users/jenkins/minikube-integration/19734-1406/.minikube/profiles/running-upgrade-520000 for IP: 10.0.2.15
	I0930 04:04:34.444454    4929 certs.go:194] generating shared ca certs ...
	I0930 04:04:34.444462    4929 certs.go:226] acquiring lock for ca certs: {Name:mkeec9701f93539137211ace80b844b19e48dcd1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 04:04:34.444617    4929 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19734-1406/.minikube/ca.key
	I0930 04:04:34.444653    4929 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19734-1406/.minikube/proxy-client-ca.key
	I0930 04:04:34.444667    4929 certs.go:256] generating profile certs ...
	I0930 04:04:34.444734    4929 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19734-1406/.minikube/profiles/running-upgrade-520000/client.key
	I0930 04:04:34.444751    4929 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19734-1406/.minikube/profiles/running-upgrade-520000/apiserver.key.b08b29dc
	I0930 04:04:34.444762    4929 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19734-1406/.minikube/profiles/running-upgrade-520000/apiserver.crt.b08b29dc with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0930 04:04:34.508593    4929 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19734-1406/.minikube/profiles/running-upgrade-520000/apiserver.crt.b08b29dc ...
	I0930 04:04:34.508598    4929 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19734-1406/.minikube/profiles/running-upgrade-520000/apiserver.crt.b08b29dc: {Name:mkb0e5c67820f5bbb9f7852aef677749d1a8c06c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 04:04:34.509052    4929 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19734-1406/.minikube/profiles/running-upgrade-520000/apiserver.key.b08b29dc ...
	I0930 04:04:34.509058    4929 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19734-1406/.minikube/profiles/running-upgrade-520000/apiserver.key.b08b29dc: {Name:mkf8d3937216e97a581955ee0dbf819ba85c38c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 04:04:34.509214    4929 certs.go:381] copying /Users/jenkins/minikube-integration/19734-1406/.minikube/profiles/running-upgrade-520000/apiserver.crt.b08b29dc -> /Users/jenkins/minikube-integration/19734-1406/.minikube/profiles/running-upgrade-520000/apiserver.crt
	I0930 04:04:34.509347    4929 certs.go:385] copying /Users/jenkins/minikube-integration/19734-1406/.minikube/profiles/running-upgrade-520000/apiserver.key.b08b29dc -> /Users/jenkins/minikube-integration/19734-1406/.minikube/profiles/running-upgrade-520000/apiserver.key
	I0930 04:04:34.509478    4929 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19734-1406/.minikube/profiles/running-upgrade-520000/proxy-client.key
	I0930 04:04:34.509597    4929 certs.go:484] found cert: /Users/jenkins/minikube-integration/19734-1406/.minikube/certs/1929.pem (1338 bytes)
	W0930 04:04:34.509618    4929 certs.go:480] ignoring /Users/jenkins/minikube-integration/19734-1406/.minikube/certs/1929_empty.pem, impossibly tiny 0 bytes
	I0930 04:04:34.509623    4929 certs.go:484] found cert: /Users/jenkins/minikube-integration/19734-1406/.minikube/certs/ca-key.pem (1675 bytes)
	I0930 04:04:34.509641    4929 certs.go:484] found cert: /Users/jenkins/minikube-integration/19734-1406/.minikube/certs/ca.pem (1078 bytes)
	I0930 04:04:34.509659    4929 certs.go:484] found cert: /Users/jenkins/minikube-integration/19734-1406/.minikube/certs/cert.pem (1123 bytes)
	I0930 04:04:34.509677    4929 certs.go:484] found cert: /Users/jenkins/minikube-integration/19734-1406/.minikube/certs/key.pem (1675 bytes)
	I0930 04:04:34.509715    4929 certs.go:484] found cert: /Users/jenkins/minikube-integration/19734-1406/.minikube/files/etc/ssl/certs/19292.pem (1708 bytes)
	I0930 04:04:34.510033    4929 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19734-1406/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0930 04:04:34.517847    4929 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19734-1406/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0930 04:04:34.524972    4929 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19734-1406/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0930 04:04:34.531874    4929 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19734-1406/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0930 04:04:34.538791    4929 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19734-1406/.minikube/profiles/running-upgrade-520000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0930 04:04:34.545928    4929 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19734-1406/.minikube/profiles/running-upgrade-520000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0930 04:04:34.552536    4929 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19734-1406/.minikube/profiles/running-upgrade-520000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0930 04:04:34.559383    4929 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19734-1406/.minikube/profiles/running-upgrade-520000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0930 04:04:34.566958    4929 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19734-1406/.minikube/files/etc/ssl/certs/19292.pem --> /usr/share/ca-certificates/19292.pem (1708 bytes)
	I0930 04:04:34.573842    4929 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19734-1406/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0930 04:04:34.580269    4929 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19734-1406/.minikube/certs/1929.pem --> /usr/share/ca-certificates/1929.pem (1338 bytes)
	I0930 04:04:34.587485    4929 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0930 04:04:34.592537    4929 ssh_runner.go:195] Run: openssl version
	I0930 04:04:34.594342    4929 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0930 04:04:34.597168    4929 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0930 04:04:34.598650    4929 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 30 10:21 /usr/share/ca-certificates/minikubeCA.pem
	I0930 04:04:34.598674    4929 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0930 04:04:34.600476    4929 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0930 04:04:34.603440    4929 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1929.pem && ln -fs /usr/share/ca-certificates/1929.pem /etc/ssl/certs/1929.pem"
	I0930 04:04:34.606374    4929 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1929.pem
	I0930 04:04:34.607667    4929 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 30 10:37 /usr/share/ca-certificates/1929.pem
	I0930 04:04:34.607686    4929 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1929.pem
	I0930 04:04:34.609334    4929 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1929.pem /etc/ssl/certs/51391683.0"
	I0930 04:04:34.612398    4929 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/19292.pem && ln -fs /usr/share/ca-certificates/19292.pem /etc/ssl/certs/19292.pem"
	I0930 04:04:34.615685    4929 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/19292.pem
	I0930 04:04:34.617064    4929 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 30 10:37 /usr/share/ca-certificates/19292.pem
	I0930 04:04:34.617088    4929 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/19292.pem
	I0930 04:04:34.618892    4929 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/19292.pem /etc/ssl/certs/3ec20f2e.0"
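
The openssl/ln sequence above installs each CA into /etc/ssl/certs under its OpenSSL subject-hash name: openssl x509 -hash prints the hash that OpenSSL's certificate lookup expects as <hash>.0 (e.g. b5213941.0 for minikubeCA.pem), and ln -fs creates or replaces the symlink. A sketch of the same convention from Go, shelling out to openssl (assumes openssl on PATH and write access to /etc/ssl/certs):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // installCA symlinks pemPath into /etc/ssl/certs under its subject-hash name.
    func installCA(pemPath string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            return err
        }
        link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
        os.Remove(link) // ln -fs semantics: replace any stale link
        return os.Symlink(pemPath, link)
    }

    func main() {
        if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }
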
	I0930 04:04:34.621444    4929 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0930 04:04:34.623241    4929 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0930 04:04:34.624913    4929 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0930 04:04:34.626654    4929 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0930 04:04:34.628277    4929 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0930 04:04:34.630193    4929 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0930 04:04:34.631882    4929 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
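
Each openssl x509 -checkend 86400 run above asks whether the certificate expires within the next 24 hours: exit 0 means still valid past the window, exit 1 means expiring, and a failing cert would be regenerated. The equivalent check in pure Go, as a sketch:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the PEM certificate at path expires within d.
    func expiresWithin(path string, d time.Duration) (bool, error) {
        raw, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(raw)
        if block == nil {
            return false, fmt.Errorf("%s: no PEM block", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        // Same window as -checkend 86400 when d is 24h.
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
        fmt.Println(soon, err)
    }
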
	I0930 04:04:34.633565    4929 kubeadm.go:392] StartCluster: {Name:running-upgrade-520000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50285 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-520000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0930 04:04:34.633639    4929 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0930 04:04:34.644314    4929 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0930 04:04:34.647615    4929 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0930 04:04:34.647640    4929 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0930 04:04:34.647670    4929 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0930 04:04:34.650270    4929 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0930 04:04:34.650496    4929 kubeconfig.go:47] verify endpoint returned: get endpoint: "running-upgrade-520000" does not appear in /Users/jenkins/minikube-integration/19734-1406/kubeconfig
	I0930 04:04:34.650548    4929 kubeconfig.go:62] /Users/jenkins/minikube-integration/19734-1406/kubeconfig needs updating (will repair): [kubeconfig missing "running-upgrade-520000" cluster setting kubeconfig missing "running-upgrade-520000" context setting]
	I0930 04:04:34.650684    4929 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19734-1406/kubeconfig: {Name:mkab83a5d15ec3b983b07760462d9a2ee8e3b4e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 04:04:34.651348    4929 kapi.go:59] client config for running-upgrade-520000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19734-1406/.minikube/profiles/running-upgrade-520000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19734-1406/.minikube/profiles/running-upgrade-520000/client.key", CAFile:"/Users/jenkins/minikube-integration/19734-1406/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1025c25d0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
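
The kubeconfig verify/repair step above notices that both the "running-upgrade-520000" cluster and context entries are missing from the test's kubeconfig and rewrites the file before building the client config. Checking for those entries with client-go's clientcmd loader, as a sketch (assumes k8s.io/client-go as a dependency):

    package main

    import (
        "fmt"

        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.LoadFromFile("/Users/jenkins/minikube-integration/19734-1406/kubeconfig")
        if err != nil {
            panic(err)
        }
        _, hasCluster := cfg.Clusters["running-upgrade-520000"]
        _, hasContext := cfg.Contexts["running-upgrade-520000"]
        fmt.Println("needs repair:", !hasCluster || !hasContext)
    }
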
	I0930 04:04:34.651676    4929 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0930 04:04:34.654311    4929 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "running-upgrade-520000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
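
Drift detection here is plain diff -u old new: exit 0 means the rendered config is unchanged, exit 1 means it drifted (above, the CRI socket gained its unix:// scheme and the cgroup driver flipped from systemd to cgroupfs), and any other status is an error. Interpreting those exit codes from Go:

    package main

    import (
        "errors"
        "fmt"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("diff", "-u",
            "/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
        out, err := cmd.Output() // stdout is still captured when diff exits non-zero
        var ee *exec.ExitError
        switch {
        case err == nil:
            fmt.Println("no drift")
        case errors.As(err, &ee) && ee.ExitCode() == 1:
            fmt.Printf("drift detected, will reconfigure:\n%s", out)
        default:
            fmt.Println("diff failed:", err)
        }
    }
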
	I0930 04:04:34.654318    4929 kubeadm.go:1160] stopping kube-system containers ...
	I0930 04:04:34.654365    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0930 04:04:34.665007    4929 docker.go:483] Stopping containers: [89bd4bd04437 572f3789de32 7415892dfabc 822d4ecfb185 8df176c83bba a764d080a1f9 84d0d4bf2c9f 9c70f688661d f856ca1ca41a a262cc6fe37d 3047389e9f15 4b0efee96daa 8408bdfbfd17 987126e47b79]
	I0930 04:04:34.665078    4929 ssh_runner.go:195] Run: docker stop 89bd4bd04437 572f3789de32 7415892dfabc 822d4ecfb185 8df176c83bba a764d080a1f9 84d0d4bf2c9f 9c70f688661d f856ca1ca41a a262cc6fe37d 3047389e9f15 4b0efee96daa 8408bdfbfd17 987126e47b79
	I0930 04:04:34.676555    4929 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0930 04:04:34.762614    4929 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0930 04:04:34.766808    4929 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5643 Sep 30 11:04 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5653 Sep 30 11:04 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2027 Sep 30 11:04 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5601 Sep 30 11:04 /etc/kubernetes/scheduler.conf
	
	I0930 04:04:34.766846    4929 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50285 /etc/kubernetes/admin.conf
	I0930 04:04:34.770202    4929 kubeadm.go:163] "https://control-plane.minikube.internal:50285" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50285 /etc/kubernetes/admin.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0930 04:04:34.770234    4929 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0930 04:04:34.773136    4929 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50285 /etc/kubernetes/kubelet.conf
	I0930 04:04:34.776391    4929 kubeadm.go:163] "https://control-plane.minikube.internal:50285" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50285 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0930 04:04:34.776416    4929 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0930 04:04:34.779995    4929 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50285 /etc/kubernetes/controller-manager.conf
	I0930 04:04:34.783146    4929 kubeadm.go:163] "https://control-plane.minikube.internal:50285" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50285 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0930 04:04:34.783169    4929 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0930 04:04:34.785906    4929 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50285 /etc/kubernetes/scheduler.conf
	I0930 04:04:34.788607    4929 kubeadm.go:163] "https://control-plane.minikube.internal:50285" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50285 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0930 04:04:34.788631    4929 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
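
For each of the four existing control-plane kubeconfig files (admin, kubelet, controller-manager, scheduler), the restart path greps for the expected endpoint https://control-plane.minikube.internal:50285; grep exiting 1 means the file points at a stale endpoint, so it is removed and left for kubeadm init phase kubeconfig to regenerate below. A sketch of that loop without shelling out to grep:

    package main

    import (
        "bytes"
        "fmt"
        "os"
    )

    func main() {
        const endpoint = "https://control-plane.minikube.internal:50285"
        for _, f := range []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"} {
            path := "/etc/kubernetes/" + f
            raw, err := os.ReadFile(path)
            if err != nil {
                continue // file absent: nothing to clean up
            }
            if !bytes.Contains(raw, []byte(endpoint)) {
                fmt.Println("stale endpoint, removing", path)
                os.Remove(path)
            }
        }
    }
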
	I0930 04:04:34.791705    4929 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0930 04:04:34.794638    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0930 04:04:34.815121    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0930 04:04:35.234156    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0930 04:04:35.423003    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0930 04:04:35.445244    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
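
Rather than a full kubeadm init, the restart path replays individual init phases in order (certs, kubeconfig, kubelet-start, control-plane, etcd), all against the same /var/tmp/minikube/kubeadm.yaml. A sketch of driving that phase sequence (assumes kubeadm on PATH; minikube actually prepends /var/lib/minikube/binaries/v1.24.1 and runs under sudo):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        phases := [][]string{
            {"certs", "all"},
            {"kubeconfig", "all"},
            {"kubelet-start"},
            {"control-plane", "all"},
            {"etcd", "local"},
        }
        for _, p := range phases {
            args := append([]string{"init", "phase"}, p...)
            args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
            if out, err := exec.Command("kubeadm", args...).CombinedOutput(); err != nil {
                fmt.Printf("phase %v failed: %v\n%s", p, err, out)
                return
            }
        }
    }
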
	I0930 04:04:35.467522    4929 api_server.go:52] waiting for apiserver process to appear ...
	I0930 04:04:35.467610    4929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 04:04:35.969993    4929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 04:04:36.469648    4929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 04:04:36.969655    4929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 04:04:36.973732    4929 api_server.go:72] duration metric: took 1.506233625s to wait for apiserver process to appear ...
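
The process wait above polls pgrep -xnf kube-apiserver.*minikube.* on a roughly 500ms interval (visible in the timestamps) until it exits 0; here the process appeared after about 1.5s, i.e. three polls. A sketch of the same backoff loop:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        deadline := time.Now().Add(2 * time.Minute) // hypothetical overall timeout
        for time.Now().Before(deadline) {
            // pgrep exits 0 as soon as a matching process exists.
            if exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
                fmt.Println("apiserver process is up")
                return
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("timed out waiting for apiserver process")
    }
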
	I0930 04:04:36.973742    4929 api_server.go:88] waiting for apiserver healthz status ...
	I0930 04:04:36.973760    4929 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:04:41.975909    4929 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:04:41.976018    4929 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:04:46.977059    4929 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:04:46.977222    4929 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:04:51.978416    4929 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:04:51.978462    4929 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:04:56.979601    4929 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:04:56.979696    4929 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:05:01.981563    4929 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:05:01.981661    4929 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:05:06.983846    4929 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:05:06.983948    4929 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:05:11.985681    4929 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:05:11.985785    4929 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:05:16.986669    4929 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:05:16.986774    4929 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:05:21.989143    4929 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:05:21.989237    4929 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:05:26.991905    4929 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:05:26.992001    4929 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:05:31.994779    4929 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:05:31.994865    4929 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:05:36.996174    4929 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
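
Every healthz probe in the run above times out after about 5 seconds (the timestamps advance in 5-second steps) and none ever succeeds: the apiserver never becomes reachable at https://10.0.2.15:8443, which is what ultimately fails TestRunningBinaryUpgrade. A minimal probe with the same timeout behaviour, as a sketch (it skips TLS verification for brevity; the real client config above pins the cluster CA and client cert):

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // demo only
            },
        }
        resp, err := client.Get("https://10.0.2.15:8443/healthz")
        if err != nil {
            // e.g. "Client.Timeout exceeded while awaiting headers", as logged above
            fmt.Println("stopped:", err)
            return
        }
        defer resp.Body.Close()
        fmt.Println("healthz:", resp.Status)
    }
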
	I0930 04:05:36.996752    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0930 04:05:37.037803    4929 logs.go:276] 2 containers: [864f592786f2 8408bdfbfd17]
	I0930 04:05:37.037988    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0930 04:05:37.058792    4929 logs.go:276] 2 containers: [7a69c1942b12 a764d080a1f9]
	I0930 04:05:37.058932    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0930 04:05:37.074312    4929 logs.go:276] 1 containers: [537166960acb]
	I0930 04:05:37.074412    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0930 04:05:37.086773    4929 logs.go:276] 2 containers: [1abd48636662 8df176c83bba]
	I0930 04:05:37.086863    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0930 04:05:37.098305    4929 logs.go:276] 1 containers: [f51c21bd868e]
	I0930 04:05:37.098384    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0930 04:05:37.109592    4929 logs.go:276] 2 containers: [73a6cf14fb81 f856ca1ca41a]
	I0930 04:05:37.109669    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0930 04:05:37.120023    4929 logs.go:276] 0 containers: []
	W0930 04:05:37.120034    4929 logs.go:278] No container was found matching "kindnet"
	I0930 04:05:37.120095    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0930 04:05:37.130572    4929 logs.go:276] 2 containers: [11e436cafb0c 90f072b83535]
	I0930 04:05:37.130588    4929 logs.go:123] Gathering logs for kube-scheduler [8df176c83bba] ...
	I0930 04:05:37.130593    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8df176c83bba"
	I0930 04:05:37.145389    4929 logs.go:123] Gathering logs for kube-proxy [f51c21bd868e] ...
	I0930 04:05:37.145402    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f51c21bd868e"
	I0930 04:05:37.157321    4929 logs.go:123] Gathering logs for storage-provisioner [11e436cafb0c] ...
	I0930 04:05:37.157334    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11e436cafb0c"
	I0930 04:05:37.168673    4929 logs.go:123] Gathering logs for kubelet ...
	I0930 04:05:37.168684    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 04:05:37.207271    4929 logs.go:123] Gathering logs for kube-apiserver [8408bdfbfd17] ...
	I0930 04:05:37.207278    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8408bdfbfd17"
	I0930 04:05:37.227335    4929 logs.go:123] Gathering logs for etcd [a764d080a1f9] ...
	I0930 04:05:37.227346    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a764d080a1f9"
	I0930 04:05:37.246258    4929 logs.go:123] Gathering logs for coredns [537166960acb] ...
	I0930 04:05:37.246272    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 537166960acb"
	I0930 04:05:37.257678    4929 logs.go:123] Gathering logs for kube-controller-manager [73a6cf14fb81] ...
	I0930 04:05:37.257687    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73a6cf14fb81"
	I0930 04:05:37.275035    4929 logs.go:123] Gathering logs for container status ...
	I0930 04:05:37.275046    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 04:05:37.287457    4929 logs.go:123] Gathering logs for describe nodes ...
	I0930 04:05:37.287470    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0930 04:05:37.355258    4929 logs.go:123] Gathering logs for kube-controller-manager [f856ca1ca41a] ...
	I0930 04:05:37.355268    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f856ca1ca41a"
	I0930 04:05:37.371700    4929 logs.go:123] Gathering logs for storage-provisioner [90f072b83535] ...
	I0930 04:05:37.371710    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90f072b83535"
	I0930 04:05:37.382866    4929 logs.go:123] Gathering logs for Docker ...
	I0930 04:05:37.382876    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0930 04:05:37.409059    4929 logs.go:123] Gathering logs for kube-scheduler [1abd48636662] ...
	I0930 04:05:37.409067    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1abd48636662"
	I0930 04:05:37.421215    4929 logs.go:123] Gathering logs for kube-apiserver [864f592786f2] ...
	I0930 04:05:37.421225    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 864f592786f2"
	I0930 04:05:37.435077    4929 logs.go:123] Gathering logs for etcd [7a69c1942b12] ...
	I0930 04:05:37.435088    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a69c1942b12"
	I0930 04:05:37.448905    4929 logs.go:123] Gathering logs for dmesg ...
	I0930 04:05:37.448915    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
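
After each failed healthz round, minikube gathers diagnostics: it lists containers per control-plane component with docker ps -a --filter=name=k8s_<component>, tails 400 lines of each with docker logs, and also collects the kubelet and docker journals, dmesg, and kubectl describe nodes. The same cycle repeats below on every retry. A sketch of the per-component tail:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        for _, comp := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "storage-provisioner"} {
            out, _ := exec.Command("docker", "ps", "-a",
                "--filter", "name=k8s_"+comp, "--format", "{{.ID}}").Output()
            for _, id := range strings.Fields(string(out)) {
                logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
                fmt.Printf("== %s [%s] ==\n%s", comp, id, logs)
            }
        }
    }
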
	I0930 04:05:39.955287    4929 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:05:44.957985    4929 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:05:44.958556    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0930 04:05:44.999315    4929 logs.go:276] 2 containers: [864f592786f2 8408bdfbfd17]
	I0930 04:05:44.999483    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0930 04:05:45.020689    4929 logs.go:276] 2 containers: [7a69c1942b12 a764d080a1f9]
	I0930 04:05:45.020804    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0930 04:05:45.035387    4929 logs.go:276] 1 containers: [537166960acb]
	I0930 04:05:45.035493    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0930 04:05:45.047734    4929 logs.go:276] 2 containers: [1abd48636662 8df176c83bba]
	I0930 04:05:45.047815    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0930 04:05:45.063037    4929 logs.go:276] 1 containers: [f51c21bd868e]
	I0930 04:05:45.063108    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0930 04:05:45.073213    4929 logs.go:276] 2 containers: [73a6cf14fb81 f856ca1ca41a]
	I0930 04:05:45.073289    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0930 04:05:45.083784    4929 logs.go:276] 0 containers: []
	W0930 04:05:45.083796    4929 logs.go:278] No container was found matching "kindnet"
	I0930 04:05:45.083857    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0930 04:05:45.094365    4929 logs.go:276] 2 containers: [11e436cafb0c 90f072b83535]
	I0930 04:05:45.094380    4929 logs.go:123] Gathering logs for kube-controller-manager [f856ca1ca41a] ...
	I0930 04:05:45.094385    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f856ca1ca41a"
	I0930 04:05:45.106254    4929 logs.go:123] Gathering logs for storage-provisioner [90f072b83535] ...
	I0930 04:05:45.106268    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90f072b83535"
	I0930 04:05:45.119123    4929 logs.go:123] Gathering logs for kubelet ...
	I0930 04:05:45.119135    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 04:05:45.160507    4929 logs.go:123] Gathering logs for describe nodes ...
	I0930 04:05:45.160517    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0930 04:05:45.196339    4929 logs.go:123] Gathering logs for kube-apiserver [864f592786f2] ...
	I0930 04:05:45.196352    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 864f592786f2"
	I0930 04:05:45.210574    4929 logs.go:123] Gathering logs for etcd [7a69c1942b12] ...
	I0930 04:05:45.210584    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a69c1942b12"
	I0930 04:05:45.223961    4929 logs.go:123] Gathering logs for etcd [a764d080a1f9] ...
	I0930 04:05:45.223971    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a764d080a1f9"
	I0930 04:05:45.241426    4929 logs.go:123] Gathering logs for kube-controller-manager [73a6cf14fb81] ...
	I0930 04:05:45.241438    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73a6cf14fb81"
	I0930 04:05:45.258498    4929 logs.go:123] Gathering logs for dmesg ...
	I0930 04:05:45.258507    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 04:05:45.263047    4929 logs.go:123] Gathering logs for kube-scheduler [8df176c83bba] ...
	I0930 04:05:45.263054    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8df176c83bba"
	I0930 04:05:45.277709    4929 logs.go:123] Gathering logs for Docker ...
	I0930 04:05:45.277719    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0930 04:05:45.303216    4929 logs.go:123] Gathering logs for container status ...
	I0930 04:05:45.303223    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 04:05:45.315123    4929 logs.go:123] Gathering logs for coredns [537166960acb] ...
	I0930 04:05:45.315136    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 537166960acb"
	I0930 04:05:45.326697    4929 logs.go:123] Gathering logs for storage-provisioner [11e436cafb0c] ...
	I0930 04:05:45.326709    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11e436cafb0c"
	I0930 04:05:45.338191    4929 logs.go:123] Gathering logs for kube-apiserver [8408bdfbfd17] ...
	I0930 04:05:45.338203    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8408bdfbfd17"
	I0930 04:05:45.357864    4929 logs.go:123] Gathering logs for kube-scheduler [1abd48636662] ...
	I0930 04:05:45.357873    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1abd48636662"
	I0930 04:05:45.370283    4929 logs.go:123] Gathering logs for kube-proxy [f51c21bd868e] ...
	I0930 04:05:45.370296    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f51c21bd868e"
	I0930 04:05:47.884273    4929 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:05:52.887079    4929 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:05:52.887668    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0930 04:05:52.927870    4929 logs.go:276] 2 containers: [864f592786f2 8408bdfbfd17]
	I0930 04:05:52.928048    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0930 04:05:52.949980    4929 logs.go:276] 2 containers: [7a69c1942b12 a764d080a1f9]
	I0930 04:05:52.950122    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0930 04:05:52.964970    4929 logs.go:276] 1 containers: [537166960acb]
	I0930 04:05:52.965064    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0930 04:05:52.977146    4929 logs.go:276] 2 containers: [1abd48636662 8df176c83bba]
	I0930 04:05:52.977220    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0930 04:05:52.987951    4929 logs.go:276] 1 containers: [f51c21bd868e]
	I0930 04:05:52.988018    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0930 04:05:52.998897    4929 logs.go:276] 2 containers: [73a6cf14fb81 f856ca1ca41a]
	I0930 04:05:52.998981    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0930 04:05:53.009278    4929 logs.go:276] 0 containers: []
	W0930 04:05:53.009290    4929 logs.go:278] No container was found matching "kindnet"
	I0930 04:05:53.009359    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0930 04:05:53.025282    4929 logs.go:276] 2 containers: [11e436cafb0c 90f072b83535]
	I0930 04:05:53.025300    4929 logs.go:123] Gathering logs for kube-scheduler [8df176c83bba] ...
	I0930 04:05:53.025305    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8df176c83bba"
	I0930 04:05:53.039123    4929 logs.go:123] Gathering logs for kube-proxy [f51c21bd868e] ...
	I0930 04:05:53.039133    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f51c21bd868e"
	I0930 04:05:53.051158    4929 logs.go:123] Gathering logs for kube-controller-manager [73a6cf14fb81] ...
	I0930 04:05:53.051170    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73a6cf14fb81"
	I0930 04:05:53.068372    4929 logs.go:123] Gathering logs for kube-controller-manager [f856ca1ca41a] ...
	I0930 04:05:53.068382    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f856ca1ca41a"
	I0930 04:05:53.080668    4929 logs.go:123] Gathering logs for storage-provisioner [90f072b83535] ...
	I0930 04:05:53.080676    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90f072b83535"
	I0930 04:05:53.091936    4929 logs.go:123] Gathering logs for kube-apiserver [8408bdfbfd17] ...
	I0930 04:05:53.091946    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8408bdfbfd17"
	I0930 04:05:53.111122    4929 logs.go:123] Gathering logs for dmesg ...
	I0930 04:05:53.111135    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 04:05:53.115999    4929 logs.go:123] Gathering logs for describe nodes ...
	I0930 04:05:53.116009    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0930 04:05:53.150667    4929 logs.go:123] Gathering logs for kube-apiserver [864f592786f2] ...
	I0930 04:05:53.150681    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 864f592786f2"
	I0930 04:05:53.167723    4929 logs.go:123] Gathering logs for etcd [a764d080a1f9] ...
	I0930 04:05:53.167733    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a764d080a1f9"
	I0930 04:05:53.185295    4929 logs.go:123] Gathering logs for container status ...
	I0930 04:05:53.185306    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 04:05:53.197248    4929 logs.go:123] Gathering logs for kubelet ...
	I0930 04:05:53.197259    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 04:05:53.238024    4929 logs.go:123] Gathering logs for storage-provisioner [11e436cafb0c] ...
	I0930 04:05:53.238031    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11e436cafb0c"
	I0930 04:05:53.249401    4929 logs.go:123] Gathering logs for Docker ...
	I0930 04:05:53.249414    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0930 04:05:53.272760    4929 logs.go:123] Gathering logs for coredns [537166960acb] ...
	I0930 04:05:53.272766    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 537166960acb"
	I0930 04:05:53.283399    4929 logs.go:123] Gathering logs for kube-scheduler [1abd48636662] ...
	I0930 04:05:53.283411    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1abd48636662"
	I0930 04:05:53.295156    4929 logs.go:123] Gathering logs for etcd [7a69c1942b12] ...
	I0930 04:05:53.295166    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a69c1942b12"
	I0930 04:05:55.811564    4929 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:06:00.814198    4929 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:06:00.814381    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0930 04:06:00.828975    4929 logs.go:276] 2 containers: [864f592786f2 8408bdfbfd17]
	I0930 04:06:00.829057    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0930 04:06:00.839588    4929 logs.go:276] 2 containers: [7a69c1942b12 a764d080a1f9]
	I0930 04:06:00.839666    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0930 04:06:00.850196    4929 logs.go:276] 1 containers: [537166960acb]
	I0930 04:06:00.850268    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0930 04:06:00.865130    4929 logs.go:276] 2 containers: [1abd48636662 8df176c83bba]
	I0930 04:06:00.865211    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0930 04:06:00.875430    4929 logs.go:276] 1 containers: [f51c21bd868e]
	I0930 04:06:00.875497    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0930 04:06:00.889927    4929 logs.go:276] 2 containers: [73a6cf14fb81 f856ca1ca41a]
	I0930 04:06:00.890006    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0930 04:06:00.905838    4929 logs.go:276] 0 containers: []
	W0930 04:06:00.905849    4929 logs.go:278] No container was found matching "kindnet"
	I0930 04:06:00.905919    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0930 04:06:00.924861    4929 logs.go:276] 2 containers: [11e436cafb0c 90f072b83535]
	I0930 04:06:00.924879    4929 logs.go:123] Gathering logs for kube-apiserver [864f592786f2] ...
	I0930 04:06:00.924885    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 864f592786f2"
	I0930 04:06:00.938799    4929 logs.go:123] Gathering logs for kube-scheduler [1abd48636662] ...
	I0930 04:06:00.938821    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1abd48636662"
	I0930 04:06:00.954177    4929 logs.go:123] Gathering logs for kube-proxy [f51c21bd868e] ...
	I0930 04:06:00.954186    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f51c21bd868e"
	I0930 04:06:00.965341    4929 logs.go:123] Gathering logs for container status ...
	I0930 04:06:00.965351    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 04:06:00.977461    4929 logs.go:123] Gathering logs for kube-controller-manager [73a6cf14fb81] ...
	I0930 04:06:00.977470    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73a6cf14fb81"
	I0930 04:06:00.994612    4929 logs.go:123] Gathering logs for kube-controller-manager [f856ca1ca41a] ...
	I0930 04:06:00.994621    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f856ca1ca41a"
	I0930 04:06:01.014471    4929 logs.go:123] Gathering logs for storage-provisioner [11e436cafb0c] ...
	I0930 04:06:01.014480    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11e436cafb0c"
	I0930 04:06:01.025593    4929 logs.go:123] Gathering logs for Docker ...
	I0930 04:06:01.025602    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0930 04:06:01.050934    4929 logs.go:123] Gathering logs for kube-apiserver [8408bdfbfd17] ...
	I0930 04:06:01.050940    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8408bdfbfd17"
	I0930 04:06:01.069516    4929 logs.go:123] Gathering logs for etcd [a764d080a1f9] ...
	I0930 04:06:01.069525    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a764d080a1f9"
	I0930 04:06:01.086288    4929 logs.go:123] Gathering logs for kube-scheduler [8df176c83bba] ...
	I0930 04:06:01.086296    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8df176c83bba"
	I0930 04:06:01.111424    4929 logs.go:123] Gathering logs for storage-provisioner [90f072b83535] ...
	I0930 04:06:01.111435    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90f072b83535"
	I0930 04:06:01.125555    4929 logs.go:123] Gathering logs for kubelet ...
	I0930 04:06:01.125565    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 04:06:01.164864    4929 logs.go:123] Gathering logs for dmesg ...
	I0930 04:06:01.164873    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 04:06:01.169862    4929 logs.go:123] Gathering logs for describe nodes ...
	I0930 04:06:01.169870    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0930 04:06:01.204099    4929 logs.go:123] Gathering logs for etcd [7a69c1942b12] ...
	I0930 04:06:01.204108    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a69c1942b12"
	I0930 04:06:01.217625    4929 logs.go:123] Gathering logs for coredns [537166960acb] ...
	I0930 04:06:01.217634    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 537166960acb"
	I0930 04:06:03.731038    4929 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:06:08.733522    4929 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:06:08.734089    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0930 04:06:08.780093    4929 logs.go:276] 2 containers: [864f592786f2 8408bdfbfd17]
	I0930 04:06:08.780252    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0930 04:06:08.802618    4929 logs.go:276] 2 containers: [7a69c1942b12 a764d080a1f9]
	I0930 04:06:08.802738    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0930 04:06:08.817924    4929 logs.go:276] 1 containers: [537166960acb]
	I0930 04:06:08.818025    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0930 04:06:08.831843    4929 logs.go:276] 2 containers: [1abd48636662 8df176c83bba]
	I0930 04:06:08.831908    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0930 04:06:08.842537    4929 logs.go:276] 1 containers: [f51c21bd868e]
	I0930 04:06:08.842611    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0930 04:06:08.853114    4929 logs.go:276] 2 containers: [73a6cf14fb81 f856ca1ca41a]
	I0930 04:06:08.853195    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0930 04:06:08.863238    4929 logs.go:276] 0 containers: []
	W0930 04:06:08.863248    4929 logs.go:278] No container was found matching "kindnet"
	I0930 04:06:08.863309    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0930 04:06:08.874201    4929 logs.go:276] 2 containers: [11e436cafb0c 90f072b83535]
	I0930 04:06:08.874218    4929 logs.go:123] Gathering logs for kube-apiserver [864f592786f2] ...
	I0930 04:06:08.874223    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 864f592786f2"
	I0930 04:06:08.888865    4929 logs.go:123] Gathering logs for etcd [a764d080a1f9] ...
	I0930 04:06:08.888874    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a764d080a1f9"
	I0930 04:06:08.907517    4929 logs.go:123] Gathering logs for kube-scheduler [1abd48636662] ...
	I0930 04:06:08.907530    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1abd48636662"
	I0930 04:06:08.918747    4929 logs.go:123] Gathering logs for storage-provisioner [90f072b83535] ...
	I0930 04:06:08.918757    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90f072b83535"
	I0930 04:06:08.936658    4929 logs.go:123] Gathering logs for kube-apiserver [8408bdfbfd17] ...
	I0930 04:06:08.936669    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8408bdfbfd17"
	I0930 04:06:08.955059    4929 logs.go:123] Gathering logs for coredns [537166960acb] ...
	I0930 04:06:08.955071    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 537166960acb"
	I0930 04:06:08.966289    4929 logs.go:123] Gathering logs for kube-proxy [f51c21bd868e] ...
	I0930 04:06:08.966298    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f51c21bd868e"
	I0930 04:06:08.978033    4929 logs.go:123] Gathering logs for kubelet ...
	I0930 04:06:08.978043    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 04:06:09.019128    4929 logs.go:123] Gathering logs for dmesg ...
	I0930 04:06:09.019136    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 04:06:09.023687    4929 logs.go:123] Gathering logs for kube-controller-manager [f856ca1ca41a] ...
	I0930 04:06:09.023696    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f856ca1ca41a"
	I0930 04:06:09.035929    4929 logs.go:123] Gathering logs for storage-provisioner [11e436cafb0c] ...
	I0930 04:06:09.035942    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11e436cafb0c"
	I0930 04:06:09.048217    4929 logs.go:123] Gathering logs for Docker ...
	I0930 04:06:09.048227    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0930 04:06:09.071987    4929 logs.go:123] Gathering logs for describe nodes ...
	I0930 04:06:09.071994    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0930 04:06:09.110721    4929 logs.go:123] Gathering logs for etcd [7a69c1942b12] ...
	I0930 04:06:09.110731    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a69c1942b12"
	I0930 04:06:09.127167    4929 logs.go:123] Gathering logs for kube-scheduler [8df176c83bba] ...
	I0930 04:06:09.127176    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8df176c83bba"
	I0930 04:06:09.141386    4929 logs.go:123] Gathering logs for kube-controller-manager [73a6cf14fb81] ...
	I0930 04:06:09.141396    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73a6cf14fb81"
	I0930 04:06:09.160788    4929 logs.go:123] Gathering logs for container status ...
	I0930 04:06:09.160802    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
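The near-identical cycles above and below are one wait loop, starting roughly 8 s apart: probe the apiserver's /healthz endpoint, and when the probe times out, re-enumerate the control-plane containers and re-gather their logs before trying again. Reading the timestamps, each probe gives up after 5 s (the gap between every "Checking apiserver healthz" and its "stopped:" line) and the next probe begins about 3 s after the gathering pass finishes. A minimal Go sketch of the probe, assuming the 5 s client timeout and the TLS handling read off the log — this is not minikube's actual api_server.go code:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// checkHealthz mirrors one probe from the log:
// GET https://10.0.2.15:8443/healthz, giving up after 5s.
func checkHealthz(url string) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Assumption: skip verification for the sketch; minikube itself
		// trusts the cluster CA rather than disabling TLS checks.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(url)
	if err != nil {
		return err // e.g. context deadline exceeded, as in every cycle here
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %d", resp.StatusCode)
	}
	return nil
}

func main() {
	for attempt := 1; attempt <= 3; attempt++ {
		if err := checkHealthz("https://10.0.2.15:8443/healthz"); err != nil {
			fmt.Println("stopped:", err)
			// In the log, container enumeration and log gathering happen
			// here, then the next probe starts ~3s later.
			time.Sleep(3 * time.Second)
			continue
		}
		fmt.Println("apiserver healthy")
		return
	}
}
```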
	I0930 04:06:11.675618    4929 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:06:16.677972    4929 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:06:16.678560    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0930 04:06:16.718821    4929 logs.go:276] 2 containers: [864f592786f2 8408bdfbfd17]
	I0930 04:06:16.718988    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0930 04:06:16.740610    4929 logs.go:276] 2 containers: [7a69c1942b12 a764d080a1f9]
	I0930 04:06:16.740724    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0930 04:06:16.755689    4929 logs.go:276] 1 containers: [537166960acb]
	I0930 04:06:16.755784    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0930 04:06:16.768454    4929 logs.go:276] 2 containers: [1abd48636662 8df176c83bba]
	I0930 04:06:16.768537    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0930 04:06:16.783564    4929 logs.go:276] 1 containers: [f51c21bd868e]
	I0930 04:06:16.783645    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0930 04:06:16.794454    4929 logs.go:276] 2 containers: [73a6cf14fb81 f856ca1ca41a]
	I0930 04:06:16.794537    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0930 04:06:16.804748    4929 logs.go:276] 0 containers: []
	W0930 04:06:16.804760    4929 logs.go:278] No container was found matching "kindnet"
	I0930 04:06:16.804824    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0930 04:06:16.815331    4929 logs.go:276] 2 containers: [11e436cafb0c 90f072b83535]
	I0930 04:06:16.815346    4929 logs.go:123] Gathering logs for kube-apiserver [864f592786f2] ...
	I0930 04:06:16.815352    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 864f592786f2"
	I0930 04:06:16.829721    4929 logs.go:123] Gathering logs for kube-apiserver [8408bdfbfd17] ...
	I0930 04:06:16.829729    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8408bdfbfd17"
	I0930 04:06:16.849461    4929 logs.go:123] Gathering logs for etcd [a764d080a1f9] ...
	I0930 04:06:16.849472    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a764d080a1f9"
	I0930 04:06:16.870357    4929 logs.go:123] Gathering logs for dmesg ...
	I0930 04:06:16.870366    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 04:06:16.875303    4929 logs.go:123] Gathering logs for kube-scheduler [1abd48636662] ...
	I0930 04:06:16.875311    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1abd48636662"
	I0930 04:06:16.887057    4929 logs.go:123] Gathering logs for kube-proxy [f51c21bd868e] ...
	I0930 04:06:16.887070    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f51c21bd868e"
	I0930 04:06:16.899354    4929 logs.go:123] Gathering logs for kube-controller-manager [f856ca1ca41a] ...
	I0930 04:06:16.899365    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f856ca1ca41a"
	I0930 04:06:16.911487    4929 logs.go:123] Gathering logs for storage-provisioner [11e436cafb0c] ...
	I0930 04:06:16.911497    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11e436cafb0c"
	I0930 04:06:16.922961    4929 logs.go:123] Gathering logs for storage-provisioner [90f072b83535] ...
	I0930 04:06:16.922971    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90f072b83535"
	I0930 04:06:16.938228    4929 logs.go:123] Gathering logs for container status ...
	I0930 04:06:16.938237    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 04:06:16.950545    4929 logs.go:123] Gathering logs for coredns [537166960acb] ...
	I0930 04:06:16.950554    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 537166960acb"
	I0930 04:06:16.961755    4929 logs.go:123] Gathering logs for describe nodes ...
	I0930 04:06:16.961764    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0930 04:06:16.996373    4929 logs.go:123] Gathering logs for etcd [7a69c1942b12] ...
	I0930 04:06:16.996387    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a69c1942b12"
	I0930 04:06:17.010349    4929 logs.go:123] Gathering logs for kube-scheduler [8df176c83bba] ...
	I0930 04:06:17.010362    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8df176c83bba"
	I0930 04:06:17.024351    4929 logs.go:123] Gathering logs for kube-controller-manager [73a6cf14fb81] ...
	I0930 04:06:17.024363    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73a6cf14fb81"
	I0930 04:06:17.041242    4929 logs.go:123] Gathering logs for Docker ...
	I0930 04:06:17.041252    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0930 04:06:17.066374    4929 logs.go:123] Gathering logs for kubelet ...
	I0930 04:06:17.066382    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
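Every probe fails with the same text: context deadline exceeded (Client.Timeout exceeded while awaiting headers). The parenthetical is the literal suffix Go's net/http appends when Client.Timeout fires before any response headers arrive — the guest's port 8443 may even accept the connection, but the apiserver never answers. A self-contained demo against a deliberately mute listener (hypothetical code; only the resulting error shape matches the log):

```go
package main

import (
	"fmt"
	"net"
	"net/http"
	"time"
)

func main() {
	// A listener that accepts connections but never writes a response,
	// standing in for an apiserver port that is open yet not serving.
	ln, err := net.Listen("tcp", "127.0.0.1:0")
	if err != nil {
		panic(err)
	}
	defer ln.Close()
	go func() {
		for {
			if _, err := ln.Accept(); err != nil {
				return
			}
		}
	}()

	client := &http.Client{Timeout: 500 * time.Millisecond}
	_, err = client.Get("http://" + ln.Addr().String() + "/healthz")
	fmt.Println(err)
	// Prints: Get "http://127.0.0.1:<port>/healthz": context deadline
	// exceeded (Client.Timeout exceeded while awaiting headers)
}
```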
	I0930 04:06:19.609634    4929 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:06:24.612044    4929 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:06:24.612346    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0930 04:06:24.643180    4929 logs.go:276] 2 containers: [864f592786f2 8408bdfbfd17]
	I0930 04:06:24.643354    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0930 04:06:24.662702    4929 logs.go:276] 2 containers: [7a69c1942b12 a764d080a1f9]
	I0930 04:06:24.662803    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0930 04:06:24.676509    4929 logs.go:276] 1 containers: [537166960acb]
	I0930 04:06:24.676594    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0930 04:06:24.688549    4929 logs.go:276] 2 containers: [1abd48636662 8df176c83bba]
	I0930 04:06:24.688623    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0930 04:06:24.698864    4929 logs.go:276] 1 containers: [f51c21bd868e]
	I0930 04:06:24.698943    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0930 04:06:24.709756    4929 logs.go:276] 2 containers: [73a6cf14fb81 f856ca1ca41a]
	I0930 04:06:24.709836    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0930 04:06:24.720003    4929 logs.go:276] 0 containers: []
	W0930 04:06:24.720015    4929 logs.go:278] No container was found matching "kindnet"
	I0930 04:06:24.720079    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0930 04:06:24.730452    4929 logs.go:276] 2 containers: [11e436cafb0c 90f072b83535]
	I0930 04:06:24.730472    4929 logs.go:123] Gathering logs for kube-controller-manager [f856ca1ca41a] ...
	I0930 04:06:24.730477    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f856ca1ca41a"
	I0930 04:06:24.742690    4929 logs.go:123] Gathering logs for storage-provisioner [11e436cafb0c] ...
	I0930 04:06:24.742701    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11e436cafb0c"
	I0930 04:06:24.754234    4929 logs.go:123] Gathering logs for kubelet ...
	I0930 04:06:24.754250    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 04:06:24.793075    4929 logs.go:123] Gathering logs for kube-apiserver [8408bdfbfd17] ...
	I0930 04:06:24.793087    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8408bdfbfd17"
	I0930 04:06:24.812263    4929 logs.go:123] Gathering logs for etcd [7a69c1942b12] ...
	I0930 04:06:24.812272    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a69c1942b12"
	I0930 04:06:24.826186    4929 logs.go:123] Gathering logs for kube-scheduler [1abd48636662] ...
	I0930 04:06:24.826197    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1abd48636662"
	I0930 04:06:24.838114    4929 logs.go:123] Gathering logs for etcd [a764d080a1f9] ...
	I0930 04:06:24.838125    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a764d080a1f9"
	I0930 04:06:24.854960    4929 logs.go:123] Gathering logs for kube-scheduler [8df176c83bba] ...
	I0930 04:06:24.854969    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8df176c83bba"
	I0930 04:06:24.869462    4929 logs.go:123] Gathering logs for kube-controller-manager [73a6cf14fb81] ...
	I0930 04:06:24.869473    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73a6cf14fb81"
	I0930 04:06:24.886413    4929 logs.go:123] Gathering logs for Docker ...
	I0930 04:06:24.886425    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0930 04:06:24.912652    4929 logs.go:123] Gathering logs for describe nodes ...
	I0930 04:06:24.912671    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0930 04:06:24.948163    4929 logs.go:123] Gathering logs for coredns [537166960acb] ...
	I0930 04:06:24.948174    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 537166960acb"
	I0930 04:06:24.960904    4929 logs.go:123] Gathering logs for kube-proxy [f51c21bd868e] ...
	I0930 04:06:24.960914    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f51c21bd868e"
	I0930 04:06:24.972631    4929 logs.go:123] Gathering logs for storage-provisioner [90f072b83535] ...
	I0930 04:06:24.972640    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90f072b83535"
	I0930 04:06:24.983850    4929 logs.go:123] Gathering logs for dmesg ...
	I0930 04:06:24.983862    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 04:06:24.988351    4929 logs.go:123] Gathering logs for kube-apiserver [864f592786f2] ...
	I0930 04:06:24.988359    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 864f592786f2"
	I0930 04:06:25.002460    4929 logs.go:123] Gathering logs for container status ...
	I0930 04:06:25.002470    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
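Each cycle re-discovers container IDs with one docker ps -a query per component, filtered on the k8s_ name prefix that cri-dockerd gives pod containers. Because -a includes exited containers, the "2 containers" lines (e.g. kube-apiserver [864f592786f2 8408bdfbfd17]) most likely pair an exited earlier instance with its restarted replacement, while coredns and kube-proxy still show a single instance each. A sketch of the enumerate-then-dump pattern; the docker invocations are copied from the log, while the component list and wrapper are illustrative:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// Components probed in each cycle of the log above.
var components = []string{
	"kube-apiserver", "etcd", "coredns", "kube-scheduler",
	"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner",
}

func main() {
	for _, c := range components {
		// Mirrors: docker ps -a --filter=name=k8s_<component> --format={{.ID}}
		out, err := exec.Command("docker", "ps", "-a",
			"--filter", "name=k8s_"+c, "--format", "{{.ID}}").Output()
		if err != nil {
			fmt.Printf("listing %s failed: %v\n", c, err)
			continue
		}
		ids := strings.Fields(string(out))
		fmt.Printf("%d containers: %v\n", len(ids), ids)
		for _, id := range ids {
			// Mirrors: docker logs --tail 400 <id>
			logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
			fmt.Printf("=== %s [%s]\n%s", c, id, logs)
		}
	}
}
```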
	I0930 04:06:27.516875    4929 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:06:32.519409    4929 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:06:32.519603    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0930 04:06:32.534155    4929 logs.go:276] 2 containers: [864f592786f2 8408bdfbfd17]
	I0930 04:06:32.534250    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0930 04:06:32.545589    4929 logs.go:276] 2 containers: [7a69c1942b12 a764d080a1f9]
	I0930 04:06:32.545681    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0930 04:06:32.556255    4929 logs.go:276] 1 containers: [537166960acb]
	I0930 04:06:32.556339    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0930 04:06:32.566937    4929 logs.go:276] 2 containers: [1abd48636662 8df176c83bba]
	I0930 04:06:32.567019    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0930 04:06:32.577446    4929 logs.go:276] 1 containers: [f51c21bd868e]
	I0930 04:06:32.577527    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0930 04:06:32.587738    4929 logs.go:276] 2 containers: [73a6cf14fb81 f856ca1ca41a]
	I0930 04:06:32.587822    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0930 04:06:32.597585    4929 logs.go:276] 0 containers: []
	W0930 04:06:32.597598    4929 logs.go:278] No container was found matching "kindnet"
	I0930 04:06:32.597668    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0930 04:06:32.612186    4929 logs.go:276] 2 containers: [11e436cafb0c 90f072b83535]
	I0930 04:06:32.612205    4929 logs.go:123] Gathering logs for Docker ...
	I0930 04:06:32.612227    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0930 04:06:32.636298    4929 logs.go:123] Gathering logs for kubelet ...
	I0930 04:06:32.636305    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 04:06:32.676199    4929 logs.go:123] Gathering logs for etcd [7a69c1942b12] ...
	I0930 04:06:32.676206    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a69c1942b12"
	I0930 04:06:32.689926    4929 logs.go:123] Gathering logs for kube-scheduler [8df176c83bba] ...
	I0930 04:06:32.689936    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8df176c83bba"
	I0930 04:06:32.703999    4929 logs.go:123] Gathering logs for kube-controller-manager [f856ca1ca41a] ...
	I0930 04:06:32.704009    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f856ca1ca41a"
	I0930 04:06:32.719040    4929 logs.go:123] Gathering logs for kube-apiserver [8408bdfbfd17] ...
	I0930 04:06:32.719053    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8408bdfbfd17"
	I0930 04:06:32.738501    4929 logs.go:123] Gathering logs for etcd [a764d080a1f9] ...
	I0930 04:06:32.738514    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a764d080a1f9"
	I0930 04:06:32.765848    4929 logs.go:123] Gathering logs for storage-provisioner [11e436cafb0c] ...
	I0930 04:06:32.765860    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11e436cafb0c"
	I0930 04:06:32.777595    4929 logs.go:123] Gathering logs for container status ...
	I0930 04:06:32.777608    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 04:06:32.792565    4929 logs.go:123] Gathering logs for kube-controller-manager [73a6cf14fb81] ...
	I0930 04:06:32.792576    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73a6cf14fb81"
	I0930 04:06:32.810517    4929 logs.go:123] Gathering logs for storage-provisioner [90f072b83535] ...
	I0930 04:06:32.810530    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90f072b83535"
	I0930 04:06:32.822154    4929 logs.go:123] Gathering logs for describe nodes ...
	I0930 04:06:32.822167    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0930 04:06:32.859166    4929 logs.go:123] Gathering logs for kube-apiserver [864f592786f2] ...
	I0930 04:06:32.859179    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 864f592786f2"
	I0930 04:06:32.875780    4929 logs.go:123] Gathering logs for coredns [537166960acb] ...
	I0930 04:06:32.875790    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 537166960acb"
	I0930 04:06:32.887123    4929 logs.go:123] Gathering logs for kube-proxy [f51c21bd868e] ...
	I0930 04:06:32.887134    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f51c21bd868e"
	I0930 04:06:32.898723    4929 logs.go:123] Gathering logs for dmesg ...
	I0930 04:06:32.898736    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 04:06:32.903896    4929 logs.go:123] Gathering logs for kube-scheduler [1abd48636662] ...
	I0930 04:06:32.903902    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1abd48636662"
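Besides the per-container dumps, each cycle collects host-level sources: the kubelet and docker/cri-docker units via journalctl, kernel messages via dmesg (on util-linux dmesg, -P disables the pager, -H selects human-readable output, -L=never disables color, and --level keeps only warn and worse), "describe nodes" via the version-pinned kubectl at /var/lib/minikube/binaries/v1.24.1/kubectl, and a container-status listing whose `which crictl || echo crictl` backtick trick expands to crictl's path when installed and otherwise to the bare word, so the command fails and falls through to sudo docker ps -a. A sketch that shells out to the same commands — the command strings are copied from the log; the Go wrapper around them is illustrative:

```go
package main

import (
	"fmt"
	"os/exec"
)

// Commands copied verbatim from the log above.
var hostCmds = []struct{ name, cmd string }{
	{"kubelet", "sudo journalctl -u kubelet -n 400"},
	{"Docker", "sudo journalctl -u docker -u cri-docker -n 400"},
	{"dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
	{"describe nodes", "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"},
	{"container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"},
}

func main() {
	for _, hc := range hostCmds {
		fmt.Printf("Gathering logs for %s ...\n", hc.name)
		out, err := exec.Command("/bin/bash", "-c", hc.cmd).CombinedOutput()
		if err != nil {
			fmt.Printf("(%s failed: %v)\n", hc.name, err)
		}
		fmt.Print(string(out))
	}
}
```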
	I0930 04:06:35.418026    4929 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:06:40.418974    4929 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:06:40.419163    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0930 04:06:40.433347    4929 logs.go:276] 2 containers: [864f592786f2 8408bdfbfd17]
	I0930 04:06:40.433438    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0930 04:06:40.444948    4929 logs.go:276] 2 containers: [7a69c1942b12 a764d080a1f9]
	I0930 04:06:40.445024    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0930 04:06:40.455684    4929 logs.go:276] 1 containers: [537166960acb]
	I0930 04:06:40.455770    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0930 04:06:40.469497    4929 logs.go:276] 2 containers: [1abd48636662 8df176c83bba]
	I0930 04:06:40.469591    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0930 04:06:40.481003    4929 logs.go:276] 1 containers: [f51c21bd868e]
	I0930 04:06:40.481088    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0930 04:06:40.492375    4929 logs.go:276] 2 containers: [73a6cf14fb81 f856ca1ca41a]
	I0930 04:06:40.492463    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0930 04:06:40.502759    4929 logs.go:276] 0 containers: []
	W0930 04:06:40.502774    4929 logs.go:278] No container was found matching "kindnet"
	I0930 04:06:40.502843    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0930 04:06:40.513176    4929 logs.go:276] 2 containers: [11e436cafb0c 90f072b83535]
	I0930 04:06:40.513195    4929 logs.go:123] Gathering logs for kube-apiserver [864f592786f2] ...
	I0930 04:06:40.513200    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 864f592786f2"
	I0930 04:06:40.526890    4929 logs.go:123] Gathering logs for etcd [a764d080a1f9] ...
	I0930 04:06:40.526899    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a764d080a1f9"
	I0930 04:06:40.545033    4929 logs.go:123] Gathering logs for container status ...
	I0930 04:06:40.545046    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 04:06:40.557532    4929 logs.go:123] Gathering logs for dmesg ...
	I0930 04:06:40.557542    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 04:06:40.561859    4929 logs.go:123] Gathering logs for etcd [7a69c1942b12] ...
	I0930 04:06:40.561868    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a69c1942b12"
	I0930 04:06:40.576078    4929 logs.go:123] Gathering logs for coredns [537166960acb] ...
	I0930 04:06:40.576087    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 537166960acb"
	I0930 04:06:40.588071    4929 logs.go:123] Gathering logs for kube-controller-manager [73a6cf14fb81] ...
	I0930 04:06:40.588082    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73a6cf14fb81"
	I0930 04:06:40.605502    4929 logs.go:123] Gathering logs for storage-provisioner [90f072b83535] ...
	I0930 04:06:40.605512    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90f072b83535"
	I0930 04:06:40.617034    4929 logs.go:123] Gathering logs for Docker ...
	I0930 04:06:40.617045    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0930 04:06:40.641284    4929 logs.go:123] Gathering logs for describe nodes ...
	I0930 04:06:40.641291    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0930 04:06:40.677499    4929 logs.go:123] Gathering logs for kube-apiserver [8408bdfbfd17] ...
	I0930 04:06:40.677512    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8408bdfbfd17"
	I0930 04:06:40.696831    4929 logs.go:123] Gathering logs for kube-scheduler [8df176c83bba] ...
	I0930 04:06:40.696841    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8df176c83bba"
	I0930 04:06:40.710343    4929 logs.go:123] Gathering logs for kube-proxy [f51c21bd868e] ...
	I0930 04:06:40.710354    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f51c21bd868e"
	I0930 04:06:40.722155    4929 logs.go:123] Gathering logs for kube-controller-manager [f856ca1ca41a] ...
	I0930 04:06:40.722165    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f856ca1ca41a"
	I0930 04:06:40.734365    4929 logs.go:123] Gathering logs for kubelet ...
	I0930 04:06:40.734375    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 04:06:40.773767    4929 logs.go:123] Gathering logs for kube-scheduler [1abd48636662] ...
	I0930 04:06:40.773775    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1abd48636662"
	I0930 04:06:40.785645    4929 logs.go:123] Gathering logs for storage-provisioner [11e436cafb0c] ...
	I0930 04:06:40.785657    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11e436cafb0c"
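Note that every "Run:" line is attributed to ssh_runner.go:195 — none of these commands execute on the macOS host; they are all run inside the QEMU guest over SSH. A rough sketch of that pattern, with the address and key path being assumptions (in a real QEMU/user-network setup the connection typically goes through a forwarded local port rather than straight to 10.0.2.15):

```go
package main

import (
	"fmt"
	"os/exec"
)

// sshRun is a hypothetical stand-in for minikube's ssh_runner: run one
// shell command inside the guest and return its combined output.
func sshRun(cmd string) (string, error) {
	out, err := exec.Command("ssh",
		"-i", "~/.minikube/machines/minikube/id_rsa", // assumed key path
		"docker@10.0.2.15",                           // assumed guest address
		cmd).CombinedOutput()
	return string(out), err
}

func main() {
	out, err := sshRun("docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}")
	fmt.Println(out, err)
}
```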
	I0930 04:06:43.298950    4929 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:06:48.301602    4929 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:06:48.301728    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0930 04:06:48.312583    4929 logs.go:276] 2 containers: [864f592786f2 8408bdfbfd17]
	I0930 04:06:48.312678    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0930 04:06:48.328286    4929 logs.go:276] 2 containers: [7a69c1942b12 a764d080a1f9]
	I0930 04:06:48.328370    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0930 04:06:48.339797    4929 logs.go:276] 1 containers: [537166960acb]
	I0930 04:06:48.339900    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0930 04:06:48.351921    4929 logs.go:276] 2 containers: [1abd48636662 8df176c83bba]
	I0930 04:06:48.352012    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0930 04:06:48.363300    4929 logs.go:276] 1 containers: [f51c21bd868e]
	I0930 04:06:48.363391    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0930 04:06:48.378774    4929 logs.go:276] 2 containers: [73a6cf14fb81 f856ca1ca41a]
	I0930 04:06:48.378860    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0930 04:06:48.389792    4929 logs.go:276] 0 containers: []
	W0930 04:06:48.389804    4929 logs.go:278] No container was found matching "kindnet"
	I0930 04:06:48.389886    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0930 04:06:48.401585    4929 logs.go:276] 2 containers: [11e436cafb0c 90f072b83535]
	I0930 04:06:48.401606    4929 logs.go:123] Gathering logs for container status ...
	I0930 04:06:48.401612    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 04:06:48.415858    4929 logs.go:123] Gathering logs for dmesg ...
	I0930 04:06:48.415871    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 04:06:48.420562    4929 logs.go:123] Gathering logs for kube-apiserver [864f592786f2] ...
	I0930 04:06:48.420573    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 864f592786f2"
	I0930 04:06:48.435355    4929 logs.go:123] Gathering logs for kube-apiserver [8408bdfbfd17] ...
	I0930 04:06:48.435366    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8408bdfbfd17"
	I0930 04:06:48.454835    4929 logs.go:123] Gathering logs for etcd [a764d080a1f9] ...
	I0930 04:06:48.454847    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a764d080a1f9"
	I0930 04:06:48.473888    4929 logs.go:123] Gathering logs for kube-scheduler [1abd48636662] ...
	I0930 04:06:48.473906    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1abd48636662"
	I0930 04:06:48.486171    4929 logs.go:123] Gathering logs for Docker ...
	I0930 04:06:48.486182    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0930 04:06:48.510023    4929 logs.go:123] Gathering logs for describe nodes ...
	I0930 04:06:48.510036    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0930 04:06:48.548630    4929 logs.go:123] Gathering logs for coredns [537166960acb] ...
	I0930 04:06:48.548641    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 537166960acb"
	I0930 04:06:48.559876    4929 logs.go:123] Gathering logs for kube-proxy [f51c21bd868e] ...
	I0930 04:06:48.559890    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f51c21bd868e"
	I0930 04:06:48.578801    4929 logs.go:123] Gathering logs for kube-controller-manager [f856ca1ca41a] ...
	I0930 04:06:48.578813    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f856ca1ca41a"
	I0930 04:06:48.592400    4929 logs.go:123] Gathering logs for kubelet ...
	I0930 04:06:48.592411    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 04:06:48.634853    4929 logs.go:123] Gathering logs for etcd [7a69c1942b12] ...
	I0930 04:06:48.634867    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a69c1942b12"
	I0930 04:06:48.648999    4929 logs.go:123] Gathering logs for kube-scheduler [8df176c83bba] ...
	I0930 04:06:48.649019    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8df176c83bba"
	I0930 04:06:48.664201    4929 logs.go:123] Gathering logs for kube-controller-manager [73a6cf14fb81] ...
	I0930 04:06:48.664214    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73a6cf14fb81"
	I0930 04:06:48.682291    4929 logs.go:123] Gathering logs for storage-provisioner [11e436cafb0c] ...
	I0930 04:06:48.682303    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11e436cafb0c"
	I0930 04:06:48.694698    4929 logs.go:123] Gathering logs for storage-provisioner [90f072b83535] ...
	I0930 04:06:48.694710    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90f072b83535"
	I0930 04:06:51.208357    4929 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:06:56.210692    4929 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:06:56.211012    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0930 04:06:56.233374    4929 logs.go:276] 2 containers: [864f592786f2 8408bdfbfd17]
	I0930 04:06:56.233500    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0930 04:06:56.249505    4929 logs.go:276] 2 containers: [7a69c1942b12 a764d080a1f9]
	I0930 04:06:56.249610    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0930 04:06:56.262286    4929 logs.go:276] 1 containers: [537166960acb]
	I0930 04:06:56.262375    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0930 04:06:56.273294    4929 logs.go:276] 2 containers: [1abd48636662 8df176c83bba]
	I0930 04:06:56.273375    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0930 04:06:56.285254    4929 logs.go:276] 1 containers: [f51c21bd868e]
	I0930 04:06:56.285334    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0930 04:06:56.296127    4929 logs.go:276] 2 containers: [73a6cf14fb81 f856ca1ca41a]
	I0930 04:06:56.296212    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0930 04:06:56.306351    4929 logs.go:276] 0 containers: []
	W0930 04:06:56.306362    4929 logs.go:278] No container was found matching "kindnet"
	I0930 04:06:56.306432    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0930 04:06:56.316927    4929 logs.go:276] 2 containers: [11e436cafb0c 90f072b83535]
	I0930 04:06:56.316951    4929 logs.go:123] Gathering logs for kube-scheduler [1abd48636662] ...
	I0930 04:06:56.316957    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1abd48636662"
	I0930 04:06:56.328367    4929 logs.go:123] Gathering logs for kube-proxy [f51c21bd868e] ...
	I0930 04:06:56.328377    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f51c21bd868e"
	I0930 04:06:56.339876    4929 logs.go:123] Gathering logs for describe nodes ...
	I0930 04:06:56.339885    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0930 04:06:56.374621    4929 logs.go:123] Gathering logs for kube-apiserver [8408bdfbfd17] ...
	I0930 04:06:56.374632    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8408bdfbfd17"
	I0930 04:06:56.394243    4929 logs.go:123] Gathering logs for etcd [a764d080a1f9] ...
	I0930 04:06:56.394253    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a764d080a1f9"
	I0930 04:06:56.420176    4929 logs.go:123] Gathering logs for kubelet ...
	I0930 04:06:56.420191    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 04:06:56.458794    4929 logs.go:123] Gathering logs for etcd [7a69c1942b12] ...
	I0930 04:06:56.458802    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a69c1942b12"
	I0930 04:06:56.472532    4929 logs.go:123] Gathering logs for Docker ...
	I0930 04:06:56.472540    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0930 04:06:56.495566    4929 logs.go:123] Gathering logs for coredns [537166960acb] ...
	I0930 04:06:56.495574    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 537166960acb"
	I0930 04:06:56.506849    4929 logs.go:123] Gathering logs for kube-controller-manager [73a6cf14fb81] ...
	I0930 04:06:56.506859    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73a6cf14fb81"
	I0930 04:06:56.524019    4929 logs.go:123] Gathering logs for storage-provisioner [11e436cafb0c] ...
	I0930 04:06:56.524029    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11e436cafb0c"
	I0930 04:06:56.536186    4929 logs.go:123] Gathering logs for kube-controller-manager [f856ca1ca41a] ...
	I0930 04:06:56.536201    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f856ca1ca41a"
	I0930 04:06:56.548941    4929 logs.go:123] Gathering logs for storage-provisioner [90f072b83535] ...
	I0930 04:06:56.548952    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90f072b83535"
	I0930 04:06:56.560134    4929 logs.go:123] Gathering logs for container status ...
	I0930 04:06:56.560146    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 04:06:56.571656    4929 logs.go:123] Gathering logs for dmesg ...
	I0930 04:06:56.571668    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 04:06:56.576278    4929 logs.go:123] Gathering logs for kube-apiserver [864f592786f2] ...
	I0930 04:06:56.576282    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 864f592786f2"
	I0930 04:06:56.590301    4929 logs.go:123] Gathering logs for kube-scheduler [8df176c83bba] ...
	I0930 04:06:56.590311    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8df176c83bba"
	I0930 04:06:59.105240    4929 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:07:04.107536    4929 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:07:04.107677    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0930 04:07:04.119011    4929 logs.go:276] 2 containers: [864f592786f2 8408bdfbfd17]
	I0930 04:07:04.119101    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0930 04:07:04.130066    4929 logs.go:276] 2 containers: [7a69c1942b12 a764d080a1f9]
	I0930 04:07:04.130153    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0930 04:07:04.141300    4929 logs.go:276] 1 containers: [537166960acb]
	I0930 04:07:04.141382    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0930 04:07:04.156784    4929 logs.go:276] 2 containers: [1abd48636662 8df176c83bba]
	I0930 04:07:04.156883    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0930 04:07:04.168330    4929 logs.go:276] 1 containers: [f51c21bd868e]
	I0930 04:07:04.168412    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0930 04:07:04.182868    4929 logs.go:276] 2 containers: [73a6cf14fb81 f856ca1ca41a]
	I0930 04:07:04.182956    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0930 04:07:04.199057    4929 logs.go:276] 0 containers: []
	W0930 04:07:04.199070    4929 logs.go:278] No container was found matching "kindnet"
	I0930 04:07:04.199138    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0930 04:07:04.210746    4929 logs.go:276] 2 containers: [11e436cafb0c 90f072b83535]
	I0930 04:07:04.210791    4929 logs.go:123] Gathering logs for dmesg ...
	I0930 04:07:04.210797    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 04:07:04.216097    4929 logs.go:123] Gathering logs for describe nodes ...
	I0930 04:07:04.216111    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0930 04:07:04.253345    4929 logs.go:123] Gathering logs for kube-scheduler [8df176c83bba] ...
	I0930 04:07:04.253356    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8df176c83bba"
	I0930 04:07:04.268236    4929 logs.go:123] Gathering logs for Docker ...
	I0930 04:07:04.268246    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0930 04:07:04.292194    4929 logs.go:123] Gathering logs for kube-apiserver [864f592786f2] ...
	I0930 04:07:04.292210    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 864f592786f2"
	I0930 04:07:04.307626    4929 logs.go:123] Gathering logs for kube-controller-manager [f856ca1ca41a] ...
	I0930 04:07:04.307643    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f856ca1ca41a"
	I0930 04:07:04.321518    4929 logs.go:123] Gathering logs for storage-provisioner [11e436cafb0c] ...
	I0930 04:07:04.321530    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11e436cafb0c"
	I0930 04:07:04.333764    4929 logs.go:123] Gathering logs for etcd [7a69c1942b12] ...
	I0930 04:07:04.333778    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a69c1942b12"
	I0930 04:07:04.349431    4929 logs.go:123] Gathering logs for coredns [537166960acb] ...
	I0930 04:07:04.349445    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 537166960acb"
	I0930 04:07:04.361462    4929 logs.go:123] Gathering logs for kube-proxy [f51c21bd868e] ...
	I0930 04:07:04.361474    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f51c21bd868e"
	I0930 04:07:04.373264    4929 logs.go:123] Gathering logs for kube-controller-manager [73a6cf14fb81] ...
	I0930 04:07:04.373277    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73a6cf14fb81"
	I0930 04:07:04.393132    4929 logs.go:123] Gathering logs for container status ...
	I0930 04:07:04.393146    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 04:07:04.406280    4929 logs.go:123] Gathering logs for kubelet ...
	I0930 04:07:04.406294    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 04:07:04.449010    4929 logs.go:123] Gathering logs for kube-apiserver [8408bdfbfd17] ...
	I0930 04:07:04.449025    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8408bdfbfd17"
	I0930 04:07:04.467806    4929 logs.go:123] Gathering logs for etcd [a764d080a1f9] ...
	I0930 04:07:04.467817    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a764d080a1f9"
	I0930 04:07:04.486560    4929 logs.go:123] Gathering logs for kube-scheduler [1abd48636662] ...
	I0930 04:07:04.486572    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1abd48636662"
	I0930 04:07:04.498239    4929 logs.go:123] Gathering logs for storage-provisioner [90f072b83535] ...
	I0930 04:07:04.498252    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90f072b83535"
	I0930 04:07:07.013674    4929 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:07:12.015988    4929 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:07:12.016579    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0930 04:07:12.056098    4929 logs.go:276] 2 containers: [864f592786f2 8408bdfbfd17]
	I0930 04:07:12.056293    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0930 04:07:12.078302    4929 logs.go:276] 2 containers: [7a69c1942b12 a764d080a1f9]
	I0930 04:07:12.078440    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0930 04:07:12.094193    4929 logs.go:276] 1 containers: [537166960acb]
	I0930 04:07:12.094292    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0930 04:07:12.106110    4929 logs.go:276] 2 containers: [1abd48636662 8df176c83bba]
	I0930 04:07:12.106181    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0930 04:07:12.120746    4929 logs.go:276] 1 containers: [f51c21bd868e]
	I0930 04:07:12.120821    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0930 04:07:12.131871    4929 logs.go:276] 2 containers: [73a6cf14fb81 f856ca1ca41a]
	I0930 04:07:12.131970    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0930 04:07:12.142310    4929 logs.go:276] 0 containers: []
	W0930 04:07:12.142322    4929 logs.go:278] No container was found matching "kindnet"
	I0930 04:07:12.142410    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0930 04:07:12.152741    4929 logs.go:276] 2 containers: [11e436cafb0c 90f072b83535]
	I0930 04:07:12.152758    4929 logs.go:123] Gathering logs for coredns [537166960acb] ...
	I0930 04:07:12.152763    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 537166960acb"
	I0930 04:07:12.163449    4929 logs.go:123] Gathering logs for kube-controller-manager [73a6cf14fb81] ...
	I0930 04:07:12.163460    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73a6cf14fb81"
	I0930 04:07:12.181437    4929 logs.go:123] Gathering logs for storage-provisioner [11e436cafb0c] ...
	I0930 04:07:12.181448    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11e436cafb0c"
	I0930 04:07:12.196549    4929 logs.go:123] Gathering logs for storage-provisioner [90f072b83535] ...
	I0930 04:07:12.196559    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90f072b83535"
	I0930 04:07:12.207987    4929 logs.go:123] Gathering logs for Docker ...
	I0930 04:07:12.207996    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0930 04:07:12.231179    4929 logs.go:123] Gathering logs for container status ...
	I0930 04:07:12.231193    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 04:07:12.245415    4929 logs.go:123] Gathering logs for kube-apiserver [8408bdfbfd17] ...
	I0930 04:07:12.245425    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8408bdfbfd17"
	I0930 04:07:12.264486    4929 logs.go:123] Gathering logs for etcd [a764d080a1f9] ...
	I0930 04:07:12.264496    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a764d080a1f9"
	I0930 04:07:12.281704    4929 logs.go:123] Gathering logs for kube-scheduler [8df176c83bba] ...
	I0930 04:07:12.281715    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8df176c83bba"
	I0930 04:07:12.295473    4929 logs.go:123] Gathering logs for describe nodes ...
	I0930 04:07:12.295483    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0930 04:07:12.329656    4929 logs.go:123] Gathering logs for kube-scheduler [1abd48636662] ...
	I0930 04:07:12.329666    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1abd48636662"
	I0930 04:07:12.341504    4929 logs.go:123] Gathering logs for kubelet ...
	I0930 04:07:12.341514    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 04:07:12.379862    4929 logs.go:123] Gathering logs for dmesg ...
	I0930 04:07:12.379872    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 04:07:12.384246    4929 logs.go:123] Gathering logs for kube-apiserver [864f592786f2] ...
	I0930 04:07:12.384251    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 864f592786f2"
	I0930 04:07:12.408026    4929 logs.go:123] Gathering logs for etcd [7a69c1942b12] ...
	I0930 04:07:12.408035    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a69c1942b12"
	I0930 04:07:12.421801    4929 logs.go:123] Gathering logs for kube-proxy [f51c21bd868e] ...
	I0930 04:07:12.421812    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f51c21bd868e"
	I0930 04:07:12.439026    4929 logs.go:123] Gathering logs for kube-controller-manager [f856ca1ca41a] ...
	I0930 04:07:12.439038    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f856ca1ca41a"
	I0930 04:07:14.954407    4929 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:07:19.955341    4929 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:07:19.955475    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0930 04:07:19.970165    4929 logs.go:276] 2 containers: [864f592786f2 8408bdfbfd17]
	I0930 04:07:19.970249    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0930 04:07:19.980986    4929 logs.go:276] 2 containers: [7a69c1942b12 a764d080a1f9]
	I0930 04:07:19.981076    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0930 04:07:19.992738    4929 logs.go:276] 1 containers: [537166960acb]
	I0930 04:07:19.992811    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0930 04:07:20.003220    4929 logs.go:276] 2 containers: [1abd48636662 8df176c83bba]
	I0930 04:07:20.003306    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0930 04:07:20.014110    4929 logs.go:276] 1 containers: [f51c21bd868e]
	I0930 04:07:20.014182    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0930 04:07:20.025260    4929 logs.go:276] 2 containers: [73a6cf14fb81 f856ca1ca41a]
	I0930 04:07:20.025342    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0930 04:07:20.035583    4929 logs.go:276] 0 containers: []
	W0930 04:07:20.035596    4929 logs.go:278] No container was found matching "kindnet"
	I0930 04:07:20.035665    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0930 04:07:20.046852    4929 logs.go:276] 2 containers: [11e436cafb0c 90f072b83535]
	I0930 04:07:20.046871    4929 logs.go:123] Gathering logs for kube-controller-manager [f856ca1ca41a] ...
	I0930 04:07:20.046877    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f856ca1ca41a"
	I0930 04:07:20.058979    4929 logs.go:123] Gathering logs for Docker ...
	I0930 04:07:20.058989    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0930 04:07:20.083826    4929 logs.go:123] Gathering logs for kubelet ...
	I0930 04:07:20.083842    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 04:07:20.124902    4929 logs.go:123] Gathering logs for describe nodes ...
	I0930 04:07:20.124915    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0930 04:07:20.159478    4929 logs.go:123] Gathering logs for etcd [a764d080a1f9] ...
	I0930 04:07:20.159489    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a764d080a1f9"
	I0930 04:07:20.180131    4929 logs.go:123] Gathering logs for kube-scheduler [8df176c83bba] ...
	I0930 04:07:20.180143    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8df176c83bba"
	I0930 04:07:20.204289    4929 logs.go:123] Gathering logs for storage-provisioner [11e436cafb0c] ...
	I0930 04:07:20.204305    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11e436cafb0c"
	I0930 04:07:20.216051    4929 logs.go:123] Gathering logs for storage-provisioner [90f072b83535] ...
	I0930 04:07:20.216061    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90f072b83535"
	I0930 04:07:20.230568    4929 logs.go:123] Gathering logs for container status ...
	I0930 04:07:20.230578    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 04:07:20.247164    4929 logs.go:123] Gathering logs for kube-controller-manager [73a6cf14fb81] ...
	I0930 04:07:20.247172    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73a6cf14fb81"
	I0930 04:07:20.266954    4929 logs.go:123] Gathering logs for kube-apiserver [864f592786f2] ...
	I0930 04:07:20.266965    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 864f592786f2"
	I0930 04:07:20.283421    4929 logs.go:123] Gathering logs for kube-apiserver [8408bdfbfd17] ...
	I0930 04:07:20.283434    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8408bdfbfd17"
	I0930 04:07:20.305010    4929 logs.go:123] Gathering logs for kube-scheduler [1abd48636662] ...
	I0930 04:07:20.305021    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1abd48636662"
	I0930 04:07:20.317356    4929 logs.go:123] Gathering logs for kube-proxy [f51c21bd868e] ...
	I0930 04:07:20.317369    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f51c21bd868e"
	I0930 04:07:20.329113    4929 logs.go:123] Gathering logs for dmesg ...
	I0930 04:07:20.329125    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 04:07:20.333312    4929 logs.go:123] Gathering logs for etcd [7a69c1942b12] ...
	I0930 04:07:20.333321    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a69c1942b12"
	I0930 04:07:20.348838    4929 logs.go:123] Gathering logs for coredns [537166960acb] ...
	I0930 04:07:20.348848    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 537166960acb"
	I0930 04:07:22.862285    4929 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:07:27.864944    4929 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:07:27.865150    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0930 04:07:27.880879    4929 logs.go:276] 2 containers: [864f592786f2 8408bdfbfd17]
	I0930 04:07:27.880972    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0930 04:07:27.891911    4929 logs.go:276] 2 containers: [7a69c1942b12 a764d080a1f9]
	I0930 04:07:27.891999    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0930 04:07:27.902680    4929 logs.go:276] 1 containers: [537166960acb]
	I0930 04:07:27.902760    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0930 04:07:27.913426    4929 logs.go:276] 2 containers: [1abd48636662 8df176c83bba]
	I0930 04:07:27.913514    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0930 04:07:27.923766    4929 logs.go:276] 1 containers: [f51c21bd868e]
	I0930 04:07:27.923845    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0930 04:07:27.934441    4929 logs.go:276] 2 containers: [73a6cf14fb81 f856ca1ca41a]
	I0930 04:07:27.934517    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0930 04:07:27.944546    4929 logs.go:276] 0 containers: []
	W0930 04:07:27.944557    4929 logs.go:278] No container was found matching "kindnet"
	I0930 04:07:27.944627    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0930 04:07:27.959239    4929 logs.go:276] 2 containers: [11e436cafb0c 90f072b83535]
	I0930 04:07:27.959260    4929 logs.go:123] Gathering logs for kube-controller-manager [73a6cf14fb81] ...
	I0930 04:07:27.959265    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73a6cf14fb81"
	I0930 04:07:27.976838    4929 logs.go:123] Gathering logs for kube-apiserver [864f592786f2] ...
	I0930 04:07:27.976848    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 864f592786f2"
	I0930 04:07:27.990288    4929 logs.go:123] Gathering logs for etcd [7a69c1942b12] ...
	I0930 04:07:27.990303    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a69c1942b12"
	I0930 04:07:28.003976    4929 logs.go:123] Gathering logs for etcd [a764d080a1f9] ...
	I0930 04:07:28.003990    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a764d080a1f9"
	I0930 04:07:28.022147    4929 logs.go:123] Gathering logs for Docker ...
	I0930 04:07:28.022158    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0930 04:07:28.045570    4929 logs.go:123] Gathering logs for dmesg ...
	I0930 04:07:28.045579    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 04:07:28.049698    4929 logs.go:123] Gathering logs for coredns [537166960acb] ...
	I0930 04:07:28.049707    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 537166960acb"
	I0930 04:07:28.060680    4929 logs.go:123] Gathering logs for kube-controller-manager [f856ca1ca41a] ...
	I0930 04:07:28.060691    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f856ca1ca41a"
	I0930 04:07:28.073965    4929 logs.go:123] Gathering logs for storage-provisioner [11e436cafb0c] ...
	I0930 04:07:28.073975    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11e436cafb0c"
	I0930 04:07:28.085360    4929 logs.go:123] Gathering logs for storage-provisioner [90f072b83535] ...
	I0930 04:07:28.085371    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90f072b83535"
	I0930 04:07:28.096670    4929 logs.go:123] Gathering logs for container status ...
	I0930 04:07:28.096683    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 04:07:28.108851    4929 logs.go:123] Gathering logs for kube-apiserver [8408bdfbfd17] ...
	I0930 04:07:28.108865    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8408bdfbfd17"
	I0930 04:07:28.128032    4929 logs.go:123] Gathering logs for kube-scheduler [8df176c83bba] ...
	I0930 04:07:28.128045    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8df176c83bba"
	I0930 04:07:28.143707    4929 logs.go:123] Gathering logs for kube-proxy [f51c21bd868e] ...
	I0930 04:07:28.143718    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f51c21bd868e"
	I0930 04:07:28.155956    4929 logs.go:123] Gathering logs for kubelet ...
	I0930 04:07:28.155969    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 04:07:28.196443    4929 logs.go:123] Gathering logs for describe nodes ...
	I0930 04:07:28.196455    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0930 04:07:28.232863    4929 logs.go:123] Gathering logs for kube-scheduler [1abd48636662] ...
	I0930 04:07:28.232875    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1abd48636662"
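The cycle above repeats for the remainder of this start attempt: poll the apiserver's /healthz endpoint, and when the probe times out, enumerate the k8s_* containers and tail their logs before trying again. A minimal Go sketch of the polling half, assuming the same 10.0.2.15:8443 endpoint and a 5s client timeout inferred from the ~5s gap between each "Checking" and "stopped:" pair (an illustrative reconstruction, not minikube's api_server.go):

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    // checkHealthz mirrors the "Checking apiserver healthz at ..." step:
    // a GET against /healthz that must answer within the client timeout.
    func checkHealthz(url string) error {
        client := &http.Client{
            Timeout: 5 * time.Second, // inferred from the ~5s "stopped:" gap (assumption)
            Transport: &http.Transport{
                // the VM's apiserver certificate is self-signed
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        resp, err := client.Get(url)
        if err != nil {
            return fmt.Errorf("stopped: %s: %w", url, err)
        }
        defer resp.Body.Close()
        if resp.StatusCode != http.StatusOK {
            return fmt.Errorf("healthz returned %d", resp.StatusCode)
        }
        return nil
    }

    func main() {
        url := "https://10.0.2.15:8443/healthz"
        deadline := time.Now().Add(4 * time.Minute) // the log shows ~4m of retries
        for time.Now().Before(deadline) {
            if err := checkHealthz(url); err != nil {
                fmt.Println(err) // the real loop gathers container logs here
                time.Sleep(3 * time.Second)
                continue
            }
            fmt.Println("apiserver healthy")
            return
        }
        fmt.Println("apiserver never became healthy; falling back to cluster reset")
    }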
	I0930 04:07:30.747415    4929 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:07:35.749679    4929 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:07:35.749859    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0930 04:07:35.762161    4929 logs.go:276] 2 containers: [864f592786f2 8408bdfbfd17]
	I0930 04:07:35.762257    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0930 04:07:35.772775    4929 logs.go:276] 2 containers: [7a69c1942b12 a764d080a1f9]
	I0930 04:07:35.772861    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0930 04:07:35.783131    4929 logs.go:276] 1 containers: [537166960acb]
	I0930 04:07:35.783227    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0930 04:07:35.800860    4929 logs.go:276] 2 containers: [1abd48636662 8df176c83bba]
	I0930 04:07:35.800943    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0930 04:07:35.811286    4929 logs.go:276] 1 containers: [f51c21bd868e]
	I0930 04:07:35.811354    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0930 04:07:35.822030    4929 logs.go:276] 2 containers: [73a6cf14fb81 f856ca1ca41a]
	I0930 04:07:35.822108    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0930 04:07:35.836689    4929 logs.go:276] 0 containers: []
	W0930 04:07:35.836701    4929 logs.go:278] No container was found matching "kindnet"
	I0930 04:07:35.836767    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0930 04:07:35.847757    4929 logs.go:276] 2 containers: [11e436cafb0c 90f072b83535]
	I0930 04:07:35.847774    4929 logs.go:123] Gathering logs for etcd [a764d080a1f9] ...
	I0930 04:07:35.847779    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a764d080a1f9"
	I0930 04:07:35.865324    4929 logs.go:123] Gathering logs for kube-controller-manager [73a6cf14fb81] ...
	I0930 04:07:35.865335    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73a6cf14fb81"
	I0930 04:07:35.882536    4929 logs.go:123] Gathering logs for kube-controller-manager [f856ca1ca41a] ...
	I0930 04:07:35.882545    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f856ca1ca41a"
	I0930 04:07:35.895670    4929 logs.go:123] Gathering logs for container status ...
	I0930 04:07:35.895684    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 04:07:35.907572    4929 logs.go:123] Gathering logs for kubelet ...
	I0930 04:07:35.907582    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 04:07:35.947654    4929 logs.go:123] Gathering logs for kube-apiserver [864f592786f2] ...
	I0930 04:07:35.947661    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 864f592786f2"
	I0930 04:07:35.967840    4929 logs.go:123] Gathering logs for kube-scheduler [8df176c83bba] ...
	I0930 04:07:35.967852    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8df176c83bba"
	I0930 04:07:35.982135    4929 logs.go:123] Gathering logs for storage-provisioner [90f072b83535] ...
	I0930 04:07:35.982148    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90f072b83535"
	I0930 04:07:35.993068    4929 logs.go:123] Gathering logs for etcd [7a69c1942b12] ...
	I0930 04:07:35.993081    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a69c1942b12"
	I0930 04:07:36.006273    4929 logs.go:123] Gathering logs for kube-scheduler [1abd48636662] ...
	I0930 04:07:36.006286    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1abd48636662"
	I0930 04:07:36.018082    4929 logs.go:123] Gathering logs for dmesg ...
	I0930 04:07:36.018094    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 04:07:36.022738    4929 logs.go:123] Gathering logs for kube-apiserver [8408bdfbfd17] ...
	I0930 04:07:36.022747    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8408bdfbfd17"
	I0930 04:07:36.042161    4929 logs.go:123] Gathering logs for kube-proxy [f51c21bd868e] ...
	I0930 04:07:36.042170    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f51c21bd868e"
	I0930 04:07:36.053896    4929 logs.go:123] Gathering logs for storage-provisioner [11e436cafb0c] ...
	I0930 04:07:36.053908    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11e436cafb0c"
	I0930 04:07:36.066218    4929 logs.go:123] Gathering logs for Docker ...
	I0930 04:07:36.066226    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0930 04:07:36.089341    4929 logs.go:123] Gathering logs for describe nodes ...
	I0930 04:07:36.089348    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0930 04:07:36.125826    4929 logs.go:123] Gathering logs for coredns [537166960acb] ...
	I0930 04:07:36.125841    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 537166960acb"
	I0930 04:07:38.639372    4929 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:07:43.641921    4929 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:07:43.642196    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0930 04:07:43.672690    4929 logs.go:276] 2 containers: [864f592786f2 8408bdfbfd17]
	I0930 04:07:43.672799    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0930 04:07:43.688203    4929 logs.go:276] 2 containers: [7a69c1942b12 a764d080a1f9]
	I0930 04:07:43.688296    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0930 04:07:43.700454    4929 logs.go:276] 1 containers: [537166960acb]
	I0930 04:07:43.700537    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0930 04:07:43.720261    4929 logs.go:276] 2 containers: [1abd48636662 8df176c83bba]
	I0930 04:07:43.720345    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0930 04:07:43.730860    4929 logs.go:276] 1 containers: [f51c21bd868e]
	I0930 04:07:43.730948    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0930 04:07:43.744536    4929 logs.go:276] 2 containers: [73a6cf14fb81 f856ca1ca41a]
	I0930 04:07:43.744610    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0930 04:07:43.754133    4929 logs.go:276] 0 containers: []
	W0930 04:07:43.754144    4929 logs.go:278] No container was found matching "kindnet"
	I0930 04:07:43.754214    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0930 04:07:43.764992    4929 logs.go:276] 2 containers: [11e436cafb0c 90f072b83535]
	I0930 04:07:43.765008    4929 logs.go:123] Gathering logs for kubelet ...
	I0930 04:07:43.765013    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 04:07:43.807193    4929 logs.go:123] Gathering logs for kube-controller-manager [73a6cf14fb81] ...
	I0930 04:07:43.807207    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73a6cf14fb81"
	I0930 04:07:43.825831    4929 logs.go:123] Gathering logs for etcd [a764d080a1f9] ...
	I0930 04:07:43.825843    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a764d080a1f9"
	I0930 04:07:43.848834    4929 logs.go:123] Gathering logs for kube-scheduler [8df176c83bba] ...
	I0930 04:07:43.848852    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8df176c83bba"
	I0930 04:07:43.863940    4929 logs.go:123] Gathering logs for storage-provisioner [11e436cafb0c] ...
	I0930 04:07:43.863952    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11e436cafb0c"
	I0930 04:07:43.876576    4929 logs.go:123] Gathering logs for storage-provisioner [90f072b83535] ...
	I0930 04:07:43.876587    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90f072b83535"
	I0930 04:07:43.894737    4929 logs.go:123] Gathering logs for Docker ...
	I0930 04:07:43.894749    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0930 04:07:43.919458    4929 logs.go:123] Gathering logs for dmesg ...
	I0930 04:07:43.919468    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 04:07:43.923913    4929 logs.go:123] Gathering logs for describe nodes ...
	I0930 04:07:43.923921    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0930 04:07:43.958747    4929 logs.go:123] Gathering logs for kube-apiserver [864f592786f2] ...
	I0930 04:07:43.958761    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 864f592786f2"
	I0930 04:07:43.973815    4929 logs.go:123] Gathering logs for kube-apiserver [8408bdfbfd17] ...
	I0930 04:07:43.973826    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8408bdfbfd17"
	I0930 04:07:43.994462    4929 logs.go:123] Gathering logs for etcd [7a69c1942b12] ...
	I0930 04:07:43.994473    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a69c1942b12"
	I0930 04:07:44.009953    4929 logs.go:123] Gathering logs for kube-controller-manager [f856ca1ca41a] ...
	I0930 04:07:44.009968    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f856ca1ca41a"
	I0930 04:07:44.022721    4929 logs.go:123] Gathering logs for coredns [537166960acb] ...
	I0930 04:07:44.022734    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 537166960acb"
	I0930 04:07:44.034575    4929 logs.go:123] Gathering logs for kube-scheduler [1abd48636662] ...
	I0930 04:07:44.034590    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1abd48636662"
	I0930 04:07:44.047047    4929 logs.go:123] Gathering logs for kube-proxy [f51c21bd868e] ...
	I0930 04:07:44.047058    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f51c21bd868e"
	I0930 04:07:44.059285    4929 logs.go:123] Gathering logs for container status ...
	I0930 04:07:44.059297    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
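Each failed health check triggers the same container-discovery pass seen above: one docker ps -a per control-plane component, filtered by the k8s_<name> prefix, then docker logs --tail 400 on every ID found (kindnet legitimately matches nothing here, since the bridge CNI is in use). A sketch of that pass, run locally rather than through ssh_runner as minikube does:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // containerIDs mirrors: docker ps -a --filter=name=k8s_<component> --format={{.ID}}
    func containerIDs(component string) ([]string, error) {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+component,
            "--format", "{{.ID}}").Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        components := []string{"kube-apiserver", "etcd", "coredns",
            "kube-scheduler", "kube-proxy", "kube-controller-manager",
            "kindnet", "storage-provisioner"}
        for _, c := range components {
            ids, err := containerIDs(c)
            if err != nil || len(ids) == 0 {
                fmt.Printf("No container was found matching %q\n", c)
                continue
            }
            for _, id := range ids {
                // mirrors: docker logs --tail 400 <id>
                logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
                fmt.Printf("== %s [%s]: %d bytes of logs\n", c, id, len(logs))
            }
        }
    }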
	I0930 04:07:46.574688    4929 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:07:51.576052    4929 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:07:51.576179    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0930 04:07:51.591095    4929 logs.go:276] 2 containers: [864f592786f2 8408bdfbfd17]
	I0930 04:07:51.591185    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0930 04:07:51.610514    4929 logs.go:276] 2 containers: [7a69c1942b12 a764d080a1f9]
	I0930 04:07:51.610599    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0930 04:07:51.622084    4929 logs.go:276] 1 containers: [537166960acb]
	I0930 04:07:51.622164    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0930 04:07:51.632982    4929 logs.go:276] 2 containers: [1abd48636662 8df176c83bba]
	I0930 04:07:51.633064    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0930 04:07:51.643553    4929 logs.go:276] 1 containers: [f51c21bd868e]
	I0930 04:07:51.643630    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0930 04:07:51.654549    4929 logs.go:276] 2 containers: [73a6cf14fb81 f856ca1ca41a]
	I0930 04:07:51.654626    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0930 04:07:51.670329    4929 logs.go:276] 0 containers: []
	W0930 04:07:51.670342    4929 logs.go:278] No container was found matching "kindnet"
	I0930 04:07:51.670430    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0930 04:07:51.683119    4929 logs.go:276] 2 containers: [11e436cafb0c 90f072b83535]
	I0930 04:07:51.683142    4929 logs.go:123] Gathering logs for dmesg ...
	I0930 04:07:51.683149    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 04:07:51.687862    4929 logs.go:123] Gathering logs for describe nodes ...
	I0930 04:07:51.687874    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0930 04:07:51.724420    4929 logs.go:123] Gathering logs for etcd [7a69c1942b12] ...
	I0930 04:07:51.724436    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a69c1942b12"
	I0930 04:07:51.738520    4929 logs.go:123] Gathering logs for kube-scheduler [1abd48636662] ...
	I0930 04:07:51.738533    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1abd48636662"
	I0930 04:07:51.750131    4929 logs.go:123] Gathering logs for kube-apiserver [8408bdfbfd17] ...
	I0930 04:07:51.750147    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8408bdfbfd17"
	I0930 04:07:51.775317    4929 logs.go:123] Gathering logs for etcd [a764d080a1f9] ...
	I0930 04:07:51.775331    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a764d080a1f9"
	I0930 04:07:51.793319    4929 logs.go:123] Gathering logs for kube-proxy [f51c21bd868e] ...
	I0930 04:07:51.793335    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f51c21bd868e"
	I0930 04:07:51.806082    4929 logs.go:123] Gathering logs for storage-provisioner [11e436cafb0c] ...
	I0930 04:07:51.806096    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11e436cafb0c"
	I0930 04:07:51.822775    4929 logs.go:123] Gathering logs for Docker ...
	I0930 04:07:51.822788    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0930 04:07:51.847711    4929 logs.go:123] Gathering logs for coredns [537166960acb] ...
	I0930 04:07:51.847727    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 537166960acb"
	I0930 04:07:51.866405    4929 logs.go:123] Gathering logs for kube-controller-manager [f856ca1ca41a] ...
	I0930 04:07:51.866413    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f856ca1ca41a"
	I0930 04:07:51.880165    4929 logs.go:123] Gathering logs for kubelet ...
	I0930 04:07:51.880176    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 04:07:51.923221    4929 logs.go:123] Gathering logs for kube-apiserver [864f592786f2] ...
	I0930 04:07:51.923244    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 864f592786f2"
	I0930 04:07:51.937867    4929 logs.go:123] Gathering logs for kube-scheduler [8df176c83bba] ...
	I0930 04:07:51.937877    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8df176c83bba"
	I0930 04:07:51.951817    4929 logs.go:123] Gathering logs for kube-controller-manager [73a6cf14fb81] ...
	I0930 04:07:51.951828    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73a6cf14fb81"
	I0930 04:07:51.968940    4929 logs.go:123] Gathering logs for storage-provisioner [90f072b83535] ...
	I0930 04:07:51.968950    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90f072b83535"
	I0930 04:07:51.981136    4929 logs.go:123] Gathering logs for container status ...
	I0930 04:07:51.981148    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 04:07:54.494590    4929 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:07:59.496405    4929 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:07:59.496517    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0930 04:07:59.509515    4929 logs.go:276] 2 containers: [864f592786f2 8408bdfbfd17]
	I0930 04:07:59.509601    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0930 04:07:59.520167    4929 logs.go:276] 2 containers: [7a69c1942b12 a764d080a1f9]
	I0930 04:07:59.520258    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0930 04:07:59.536736    4929 logs.go:276] 1 containers: [537166960acb]
	I0930 04:07:59.536825    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0930 04:07:59.553775    4929 logs.go:276] 2 containers: [1abd48636662 8df176c83bba]
	I0930 04:07:59.553862    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0930 04:07:59.564357    4929 logs.go:276] 1 containers: [f51c21bd868e]
	I0930 04:07:59.564438    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0930 04:07:59.581392    4929 logs.go:276] 2 containers: [73a6cf14fb81 f856ca1ca41a]
	I0930 04:07:59.581474    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0930 04:07:59.592084    4929 logs.go:276] 0 containers: []
	W0930 04:07:59.592099    4929 logs.go:278] No container was found matching "kindnet"
	I0930 04:07:59.592177    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0930 04:07:59.602868    4929 logs.go:276] 2 containers: [11e436cafb0c 90f072b83535]
	I0930 04:07:59.602886    4929 logs.go:123] Gathering logs for kubelet ...
	I0930 04:07:59.602891    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 04:07:59.641787    4929 logs.go:123] Gathering logs for kube-apiserver [864f592786f2] ...
	I0930 04:07:59.641804    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 864f592786f2"
	I0930 04:07:59.670301    4929 logs.go:123] Gathering logs for etcd [7a69c1942b12] ...
	I0930 04:07:59.670317    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a69c1942b12"
	I0930 04:07:59.692565    4929 logs.go:123] Gathering logs for Docker ...
	I0930 04:07:59.692577    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0930 04:07:59.716092    4929 logs.go:123] Gathering logs for container status ...
	I0930 04:07:59.716102    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 04:07:59.727656    4929 logs.go:123] Gathering logs for dmesg ...
	I0930 04:07:59.727666    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 04:07:59.731780    4929 logs.go:123] Gathering logs for etcd [a764d080a1f9] ...
	I0930 04:07:59.731788    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a764d080a1f9"
	I0930 04:07:59.748670    4929 logs.go:123] Gathering logs for kube-scheduler [1abd48636662] ...
	I0930 04:07:59.748680    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1abd48636662"
	I0930 04:07:59.764811    4929 logs.go:123] Gathering logs for storage-provisioner [11e436cafb0c] ...
	I0930 04:07:59.764820    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11e436cafb0c"
	I0930 04:07:59.776502    4929 logs.go:123] Gathering logs for storage-provisioner [90f072b83535] ...
	I0930 04:07:59.776517    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90f072b83535"
	I0930 04:07:59.787756    4929 logs.go:123] Gathering logs for kube-apiserver [8408bdfbfd17] ...
	I0930 04:07:59.787767    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8408bdfbfd17"
	I0930 04:07:59.806663    4929 logs.go:123] Gathering logs for kube-proxy [f51c21bd868e] ...
	I0930 04:07:59.806672    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f51c21bd868e"
	I0930 04:07:59.822440    4929 logs.go:123] Gathering logs for kube-controller-manager [73a6cf14fb81] ...
	I0930 04:07:59.822448    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73a6cf14fb81"
	I0930 04:07:59.839408    4929 logs.go:123] Gathering logs for kube-controller-manager [f856ca1ca41a] ...
	I0930 04:07:59.839422    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f856ca1ca41a"
	I0930 04:07:59.852047    4929 logs.go:123] Gathering logs for describe nodes ...
	I0930 04:07:59.852055    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0930 04:07:59.886520    4929 logs.go:123] Gathering logs for coredns [537166960acb] ...
	I0930 04:07:59.886530    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 537166960acb"
	I0930 04:07:59.897744    4929 logs.go:123] Gathering logs for kube-scheduler [8df176c83bba] ...
	I0930 04:07:59.897755    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8df176c83bba"
	I0930 04:08:02.414066    4929 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:08:07.416382    4929 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:08:07.416682    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0930 04:08:07.436957    4929 logs.go:276] 2 containers: [864f592786f2 8408bdfbfd17]
	I0930 04:08:07.437078    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0930 04:08:07.451595    4929 logs.go:276] 2 containers: [7a69c1942b12 a764d080a1f9]
	I0930 04:08:07.451688    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0930 04:08:07.463651    4929 logs.go:276] 1 containers: [537166960acb]
	I0930 04:08:07.463737    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0930 04:08:07.474582    4929 logs.go:276] 2 containers: [1abd48636662 8df176c83bba]
	I0930 04:08:07.474662    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0930 04:08:07.484857    4929 logs.go:276] 1 containers: [f51c21bd868e]
	I0930 04:08:07.484941    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0930 04:08:07.495090    4929 logs.go:276] 2 containers: [73a6cf14fb81 f856ca1ca41a]
	I0930 04:08:07.495168    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0930 04:08:07.505201    4929 logs.go:276] 0 containers: []
	W0930 04:08:07.505211    4929 logs.go:278] No container was found matching "kindnet"
	I0930 04:08:07.505284    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0930 04:08:07.515843    4929 logs.go:276] 2 containers: [11e436cafb0c 90f072b83535]
	I0930 04:08:07.515860    4929 logs.go:123] Gathering logs for describe nodes ...
	I0930 04:08:07.515865    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0930 04:08:07.556559    4929 logs.go:123] Gathering logs for kube-apiserver [864f592786f2] ...
	I0930 04:08:07.556570    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 864f592786f2"
	I0930 04:08:07.574850    4929 logs.go:123] Gathering logs for etcd [a764d080a1f9] ...
	I0930 04:08:07.574861    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a764d080a1f9"
	I0930 04:08:07.592161    4929 logs.go:123] Gathering logs for coredns [537166960acb] ...
	I0930 04:08:07.592172    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 537166960acb"
	I0930 04:08:07.606123    4929 logs.go:123] Gathering logs for kube-scheduler [1abd48636662] ...
	I0930 04:08:07.606135    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1abd48636662"
	I0930 04:08:07.618352    4929 logs.go:123] Gathering logs for kube-scheduler [8df176c83bba] ...
	I0930 04:08:07.618361    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8df176c83bba"
	I0930 04:08:07.632518    4929 logs.go:123] Gathering logs for storage-provisioner [11e436cafb0c] ...
	I0930 04:08:07.632529    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11e436cafb0c"
	I0930 04:08:07.643502    4929 logs.go:123] Gathering logs for storage-provisioner [90f072b83535] ...
	I0930 04:08:07.643513    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90f072b83535"
	I0930 04:08:07.657821    4929 logs.go:123] Gathering logs for Docker ...
	I0930 04:08:07.657834    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0930 04:08:07.683254    4929 logs.go:123] Gathering logs for container status ...
	I0930 04:08:07.683267    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 04:08:07.695181    4929 logs.go:123] Gathering logs for kube-controller-manager [73a6cf14fb81] ...
	I0930 04:08:07.695195    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73a6cf14fb81"
	I0930 04:08:07.712660    4929 logs.go:123] Gathering logs for kubelet ...
	I0930 04:08:07.712675    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 04:08:07.753339    4929 logs.go:123] Gathering logs for dmesg ...
	I0930 04:08:07.753346    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 04:08:07.757660    4929 logs.go:123] Gathering logs for etcd [7a69c1942b12] ...
	I0930 04:08:07.757666    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a69c1942b12"
	I0930 04:08:07.775745    4929 logs.go:123] Gathering logs for kube-controller-manager [f856ca1ca41a] ...
	I0930 04:08:07.775755    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f856ca1ca41a"
	I0930 04:08:07.787751    4929 logs.go:123] Gathering logs for kube-apiserver [8408bdfbfd17] ...
	I0930 04:08:07.787760    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8408bdfbfd17"
	I0930 04:08:07.807195    4929 logs.go:123] Gathering logs for kube-proxy [f51c21bd868e] ...
	I0930 04:08:07.807204    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f51c21bd868e"
	I0930 04:08:10.320858    4929 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:08:15.323030    4929 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:08:15.323228    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0930 04:08:15.339824    4929 logs.go:276] 2 containers: [864f592786f2 8408bdfbfd17]
	I0930 04:08:15.339915    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0930 04:08:15.352795    4929 logs.go:276] 2 containers: [7a69c1942b12 a764d080a1f9]
	I0930 04:08:15.352884    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0930 04:08:15.364039    4929 logs.go:276] 1 containers: [537166960acb]
	I0930 04:08:15.364131    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0930 04:08:15.374497    4929 logs.go:276] 2 containers: [1abd48636662 8df176c83bba]
	I0930 04:08:15.374578    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0930 04:08:15.385093    4929 logs.go:276] 1 containers: [f51c21bd868e]
	I0930 04:08:15.385174    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0930 04:08:15.395553    4929 logs.go:276] 2 containers: [73a6cf14fb81 f856ca1ca41a]
	I0930 04:08:15.395629    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0930 04:08:15.406096    4929 logs.go:276] 0 containers: []
	W0930 04:08:15.406110    4929 logs.go:278] No container was found matching "kindnet"
	I0930 04:08:15.406182    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0930 04:08:15.416767    4929 logs.go:276] 2 containers: [11e436cafb0c 90f072b83535]
	I0930 04:08:15.416786    4929 logs.go:123] Gathering logs for kubelet ...
	I0930 04:08:15.416791    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 04:08:15.455872    4929 logs.go:123] Gathering logs for describe nodes ...
	I0930 04:08:15.455878    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0930 04:08:15.491602    4929 logs.go:123] Gathering logs for kube-apiserver [864f592786f2] ...
	I0930 04:08:15.491617    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 864f592786f2"
	I0930 04:08:15.514301    4929 logs.go:123] Gathering logs for etcd [7a69c1942b12] ...
	I0930 04:08:15.514316    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a69c1942b12"
	I0930 04:08:15.528938    4929 logs.go:123] Gathering logs for storage-provisioner [90f072b83535] ...
	I0930 04:08:15.528952    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90f072b83535"
	I0930 04:08:15.540640    4929 logs.go:123] Gathering logs for coredns [537166960acb] ...
	I0930 04:08:15.540650    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 537166960acb"
	I0930 04:08:15.551522    4929 logs.go:123] Gathering logs for kube-scheduler [1abd48636662] ...
	I0930 04:08:15.551532    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1abd48636662"
	I0930 04:08:15.565900    4929 logs.go:123] Gathering logs for kube-scheduler [8df176c83bba] ...
	I0930 04:08:15.565911    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8df176c83bba"
	I0930 04:08:15.580398    4929 logs.go:123] Gathering logs for storage-provisioner [11e436cafb0c] ...
	I0930 04:08:15.580407    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11e436cafb0c"
	I0930 04:08:15.596493    4929 logs.go:123] Gathering logs for etcd [a764d080a1f9] ...
	I0930 04:08:15.596504    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a764d080a1f9"
	I0930 04:08:15.614664    4929 logs.go:123] Gathering logs for kube-controller-manager [f856ca1ca41a] ...
	I0930 04:08:15.614674    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f856ca1ca41a"
	I0930 04:08:15.632834    4929 logs.go:123] Gathering logs for Docker ...
	I0930 04:08:15.632849    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0930 04:08:15.656392    4929 logs.go:123] Gathering logs for container status ...
	I0930 04:08:15.656402    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 04:08:15.668982    4929 logs.go:123] Gathering logs for dmesg ...
	I0930 04:08:15.668996    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 04:08:15.673558    4929 logs.go:123] Gathering logs for kube-apiserver [8408bdfbfd17] ...
	I0930 04:08:15.673564    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8408bdfbfd17"
	I0930 04:08:15.701204    4929 logs.go:123] Gathering logs for kube-proxy [f51c21bd868e] ...
	I0930 04:08:15.701214    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f51c21bd868e"
	I0930 04:08:15.713036    4929 logs.go:123] Gathering logs for kube-controller-manager [73a6cf14fb81] ...
	I0930 04:08:15.713050    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73a6cf14fb81"
	I0930 04:08:18.231056    4929 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:08:23.233177    4929 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:08:23.233369    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0930 04:08:23.246792    4929 logs.go:276] 2 containers: [864f592786f2 8408bdfbfd17]
	I0930 04:08:23.246887    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0930 04:08:23.258147    4929 logs.go:276] 2 containers: [7a69c1942b12 a764d080a1f9]
	I0930 04:08:23.258234    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0930 04:08:23.268428    4929 logs.go:276] 1 containers: [537166960acb]
	I0930 04:08:23.268511    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0930 04:08:23.279145    4929 logs.go:276] 2 containers: [1abd48636662 8df176c83bba]
	I0930 04:08:23.279231    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0930 04:08:23.289796    4929 logs.go:276] 1 containers: [f51c21bd868e]
	I0930 04:08:23.289884    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0930 04:08:23.303421    4929 logs.go:276] 2 containers: [73a6cf14fb81 f856ca1ca41a]
	I0930 04:08:23.303507    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0930 04:08:23.314048    4929 logs.go:276] 0 containers: []
	W0930 04:08:23.314060    4929 logs.go:278] No container was found matching "kindnet"
	I0930 04:08:23.314131    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0930 04:08:23.324703    4929 logs.go:276] 2 containers: [11e436cafb0c 90f072b83535]
	I0930 04:08:23.324720    4929 logs.go:123] Gathering logs for kubelet ...
	I0930 04:08:23.324725    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 04:08:23.365874    4929 logs.go:123] Gathering logs for etcd [a764d080a1f9] ...
	I0930 04:08:23.365886    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a764d080a1f9"
	I0930 04:08:23.382991    4929 logs.go:123] Gathering logs for kube-proxy [f51c21bd868e] ...
	I0930 04:08:23.383002    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f51c21bd868e"
	I0930 04:08:23.394601    4929 logs.go:123] Gathering logs for kube-controller-manager [73a6cf14fb81] ...
	I0930 04:08:23.394611    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73a6cf14fb81"
	I0930 04:08:23.412126    4929 logs.go:123] Gathering logs for storage-provisioner [11e436cafb0c] ...
	I0930 04:08:23.412136    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11e436cafb0c"
	I0930 04:08:23.423858    4929 logs.go:123] Gathering logs for describe nodes ...
	I0930 04:08:23.423868    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0930 04:08:23.459405    4929 logs.go:123] Gathering logs for container status ...
	I0930 04:08:23.459418    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 04:08:23.471089    4929 logs.go:123] Gathering logs for kube-apiserver [864f592786f2] ...
	I0930 04:08:23.471099    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 864f592786f2"
	I0930 04:08:23.485603    4929 logs.go:123] Gathering logs for coredns [537166960acb] ...
	I0930 04:08:23.485612    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 537166960acb"
	I0930 04:08:23.497064    4929 logs.go:123] Gathering logs for kube-controller-manager [f856ca1ca41a] ...
	I0930 04:08:23.497075    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f856ca1ca41a"
	I0930 04:08:23.509755    4929 logs.go:123] Gathering logs for dmesg ...
	I0930 04:08:23.509766    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 04:08:23.514429    4929 logs.go:123] Gathering logs for kube-apiserver [8408bdfbfd17] ...
	I0930 04:08:23.514439    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8408bdfbfd17"
	I0930 04:08:23.533403    4929 logs.go:123] Gathering logs for etcd [7a69c1942b12] ...
	I0930 04:08:23.533412    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a69c1942b12"
	I0930 04:08:23.547204    4929 logs.go:123] Gathering logs for kube-scheduler [1abd48636662] ...
	I0930 04:08:23.547215    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1abd48636662"
	I0930 04:08:23.561508    4929 logs.go:123] Gathering logs for kube-scheduler [8df176c83bba] ...
	I0930 04:08:23.561518    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8df176c83bba"
	I0930 04:08:23.575319    4929 logs.go:123] Gathering logs for storage-provisioner [90f072b83535] ...
	I0930 04:08:23.575333    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90f072b83535"
	I0930 04:08:23.586853    4929 logs.go:123] Gathering logs for Docker ...
	I0930 04:08:23.586865    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0930 04:08:26.111321    4929 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:08:31.113673    4929 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:08:31.113961    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0930 04:08:31.134908    4929 logs.go:276] 2 containers: [864f592786f2 8408bdfbfd17]
	I0930 04:08:31.135025    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0930 04:08:31.150406    4929 logs.go:276] 2 containers: [7a69c1942b12 a764d080a1f9]
	I0930 04:08:31.150499    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0930 04:08:31.165661    4929 logs.go:276] 1 containers: [537166960acb]
	I0930 04:08:31.165745    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0930 04:08:31.180946    4929 logs.go:276] 2 containers: [1abd48636662 8df176c83bba]
	I0930 04:08:31.181041    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0930 04:08:31.191016    4929 logs.go:276] 1 containers: [f51c21bd868e]
	I0930 04:08:31.191105    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0930 04:08:31.201635    4929 logs.go:276] 2 containers: [73a6cf14fb81 f856ca1ca41a]
	I0930 04:08:31.201716    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0930 04:08:31.212201    4929 logs.go:276] 0 containers: []
	W0930 04:08:31.212211    4929 logs.go:278] No container was found matching "kindnet"
	I0930 04:08:31.212277    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0930 04:08:31.226859    4929 logs.go:276] 2 containers: [11e436cafb0c 90f072b83535]
	I0930 04:08:31.226878    4929 logs.go:123] Gathering logs for kubelet ...
	I0930 04:08:31.226883    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 04:08:31.268209    4929 logs.go:123] Gathering logs for describe nodes ...
	I0930 04:08:31.268221    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0930 04:08:31.303843    4929 logs.go:123] Gathering logs for kube-scheduler [1abd48636662] ...
	I0930 04:08:31.303856    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1abd48636662"
	I0930 04:08:31.315732    4929 logs.go:123] Gathering logs for kube-proxy [f51c21bd868e] ...
	I0930 04:08:31.315744    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f51c21bd868e"
	I0930 04:08:31.327404    4929 logs.go:123] Gathering logs for storage-provisioner [11e436cafb0c] ...
	I0930 04:08:31.327414    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11e436cafb0c"
	I0930 04:08:31.338891    4929 logs.go:123] Gathering logs for kube-apiserver [8408bdfbfd17] ...
	I0930 04:08:31.338906    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8408bdfbfd17"
	I0930 04:08:31.358250    4929 logs.go:123] Gathering logs for etcd [a764d080a1f9] ...
	I0930 04:08:31.358261    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a764d080a1f9"
	I0930 04:08:31.376365    4929 logs.go:123] Gathering logs for storage-provisioner [90f072b83535] ...
	I0930 04:08:31.376376    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90f072b83535"
	I0930 04:08:31.393948    4929 logs.go:123] Gathering logs for container status ...
	I0930 04:08:31.393959    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 04:08:31.406130    4929 logs.go:123] Gathering logs for dmesg ...
	I0930 04:08:31.406142    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 04:08:31.410918    4929 logs.go:123] Gathering logs for kube-controller-manager [73a6cf14fb81] ...
	I0930 04:08:31.410927    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73a6cf14fb81"
	I0930 04:08:31.428435    4929 logs.go:123] Gathering logs for Docker ...
	I0930 04:08:31.428445    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0930 04:08:31.451146    4929 logs.go:123] Gathering logs for kube-apiserver [864f592786f2] ...
	I0930 04:08:31.451154    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 864f592786f2"
	I0930 04:08:31.465506    4929 logs.go:123] Gathering logs for etcd [7a69c1942b12] ...
	I0930 04:08:31.465515    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a69c1942b12"
	I0930 04:08:31.479065    4929 logs.go:123] Gathering logs for coredns [537166960acb] ...
	I0930 04:08:31.479075    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 537166960acb"
	I0930 04:08:31.490377    4929 logs.go:123] Gathering logs for kube-scheduler [8df176c83bba] ...
	I0930 04:08:31.490390    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8df176c83bba"
	I0930 04:08:31.504498    4929 logs.go:123] Gathering logs for kube-controller-manager [f856ca1ca41a] ...
	I0930 04:08:31.504508    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f856ca1ca41a"
	I0930 04:08:34.018563    4929 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:08:39.020900    4929 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:08:39.020989    4929 kubeadm.go:597] duration metric: took 4m4.37681725s to restartPrimaryControlPlane
	W0930 04:08:39.021048    4929 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0930 04:08:39.021078    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0930 04:08:39.970777    4929 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0930 04:08:39.976002    4929 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0930 04:08:39.978875    4929 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0930 04:08:39.981696    4929 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0930 04:08:39.981702    4929 kubeadm.go:157] found existing configuration files:
	
	I0930 04:08:39.981726    4929 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50285 /etc/kubernetes/admin.conf
	I0930 04:08:39.984201    4929 kubeadm.go:163] "https://control-plane.minikube.internal:50285" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50285 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0930 04:08:39.984232    4929 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0930 04:08:39.986856    4929 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50285 /etc/kubernetes/kubelet.conf
	I0930 04:08:39.989681    4929 kubeadm.go:163] "https://control-plane.minikube.internal:50285" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50285 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0930 04:08:39.989708    4929 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0930 04:08:39.992279    4929 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50285 /etc/kubernetes/controller-manager.conf
	I0930 04:08:39.994832    4929 kubeadm.go:163] "https://control-plane.minikube.internal:50285" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50285 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0930 04:08:39.994859    4929 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0930 04:08:39.998020    4929 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50285 /etc/kubernetes/scheduler.conf
	I0930 04:08:40.000779    4929 kubeadm.go:163] "https://control-plane.minikube.internal:50285" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50285 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0930 04:08:40.000807    4929 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
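The four grep/rm pairs above implement a simple rule: a kubeconfig under /etc/kubernetes is kept only if it already references this cluster's control-plane endpoint; anything else is deleted so that kubeadm init can regenerate it. The same logic expressed as a small Go sketch (minikube actually issues the grep and rm commands shown above over SSH):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // cleanupStaleConfigs keeps a kubeconfig only if it already points at the
    // expected control-plane endpoint; otherwise it removes the file so that
    // `kubeadm init` rewrites it.
    func cleanupStaleConfigs(endpoint string, files []string) {
        for _, f := range files {
            data, err := os.ReadFile(f)
            if err == nil && strings.Contains(string(data), endpoint) {
                continue // config already targets this cluster
            }
            fmt.Printf("%q may not be in %s - will remove\n", endpoint, f)
            os.Remove(f) // ignore the error: the file may already be absent
        }
    }

    func main() {
        cleanupStaleConfigs("https://control-plane.minikube.internal:50285", []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        })
    }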
	I0930 04:08:40.003187    4929 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0930 04:08:40.019667    4929 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0930 04:08:40.019762    4929 kubeadm.go:310] [preflight] Running pre-flight checks
	I0930 04:08:40.066485    4929 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0930 04:08:40.066546    4929 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0930 04:08:40.066662    4929 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0930 04:08:40.122038    4929 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0930 04:08:40.127915    4929 out.go:235]   - Generating certificates and keys ...
	I0930 04:08:40.127957    4929 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0930 04:08:40.127987    4929 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0930 04:08:40.128030    4929 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0930 04:08:40.128060    4929 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0930 04:08:40.128095    4929 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0930 04:08:40.128124    4929 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0930 04:08:40.128179    4929 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0930 04:08:40.128219    4929 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0930 04:08:40.128268    4929 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0930 04:08:40.128316    4929 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0930 04:08:40.128336    4929 kubeadm.go:310] [certs] Using the existing "sa" key
	I0930 04:08:40.128366    4929 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0930 04:08:40.357397    4929 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0930 04:08:40.470907    4929 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0930 04:08:40.509979    4929 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0930 04:08:40.679992    4929 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0930 04:08:40.710945    4929 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0930 04:08:40.711357    4929 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0930 04:08:40.711466    4929 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0930 04:08:40.796420    4929 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0930 04:08:40.800599    4929 out.go:235]   - Booting up control plane ...
	I0930 04:08:40.800652    4929 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0930 04:08:40.800696    4929 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0930 04:08:40.800726    4929 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0930 04:08:40.800763    4929 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0930 04:08:40.800866    4929 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0930 04:08:45.304306    4929 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.503021 seconds
	I0930 04:08:45.304480    4929 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0930 04:08:45.311147    4929 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0930 04:08:45.820090    4929 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0930 04:08:45.820180    4929 kubeadm.go:310] [mark-control-plane] Marking the node running-upgrade-520000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0930 04:08:46.324912    4929 kubeadm.go:310] [bootstrap-token] Using token: 7c3uuf.lkibzyvgf4w6zyq5
	I0930 04:08:46.329077    4929 out.go:235]   - Configuring RBAC rules ...
	I0930 04:08:46.329138    4929 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0930 04:08:46.329184    4929 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0930 04:08:46.336052    4929 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0930 04:08:46.337030    4929 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0930 04:08:46.337854    4929 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0930 04:08:46.338654    4929 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0930 04:08:46.342865    4929 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0930 04:08:46.488240    4929 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0930 04:08:46.729119    4929 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0930 04:08:46.729578    4929 kubeadm.go:310] 
	I0930 04:08:46.729614    4929 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0930 04:08:46.729621    4929 kubeadm.go:310] 
	I0930 04:08:46.729680    4929 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0930 04:08:46.729688    4929 kubeadm.go:310] 
	I0930 04:08:46.729704    4929 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0930 04:08:46.729736    4929 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0930 04:08:46.729768    4929 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0930 04:08:46.729776    4929 kubeadm.go:310] 
	I0930 04:08:46.729804    4929 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0930 04:08:46.729809    4929 kubeadm.go:310] 
	I0930 04:08:46.729829    4929 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0930 04:08:46.729832    4929 kubeadm.go:310] 
	I0930 04:08:46.729858    4929 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0930 04:08:46.729892    4929 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0930 04:08:46.729925    4929 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0930 04:08:46.729928    4929 kubeadm.go:310] 
	I0930 04:08:46.729979    4929 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0930 04:08:46.730016    4929 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0930 04:08:46.730020    4929 kubeadm.go:310] 
	I0930 04:08:46.730064    4929 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 7c3uuf.lkibzyvgf4w6zyq5 \
	I0930 04:08:46.730113    4929 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:72c345a63d908b27c1ed290ebc60ebd5e5e1c4e3ebfaa90fcb5390bc8578ae1d \
	I0930 04:08:46.730130    4929 kubeadm.go:310] 	--control-plane 
	I0930 04:08:46.730135    4929 kubeadm.go:310] 
	I0930 04:08:46.730213    4929 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0930 04:08:46.730217    4929 kubeadm.go:310] 
	I0930 04:08:46.730304    4929 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 7c3uuf.lkibzyvgf4w6zyq5 \
	I0930 04:08:46.730359    4929 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:72c345a63d908b27c1ed290ebc60ebd5e5e1c4e3ebfaa90fcb5390bc8578ae1d 
	I0930 04:08:46.730412    4929 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
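The trailing preflight warning is self-describing; the fix kubeadm suggests would be run inside the guest. A hypothetical one-liner using minikube ssh for this profile:

    minikube ssh -p running-upgrade-520000 -- sudo systemctl enable kubelet.service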
	I0930 04:08:46.730418    4929 cni.go:84] Creating CNI manager for ""
	I0930 04:08:46.730428    4929 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0930 04:08:46.734199    4929 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0930 04:08:46.741407    4929 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0930 04:08:46.744275    4929 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
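The log records only the size of the generated CNI config (496 bytes), not its contents. For reference, a minimal bridge conflist of the kind minikube writes for the docker runtime might look like the sketch below; the subnet, plugin set, and version string here are illustrative assumptions, not values recovered from this run:

    # hypothetical reconstruction of /etc/cni/net.d/1-k8s.conflist (the real file is generated by minikube)
    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        {
          "type": "portmap",
          "capabilities": { "portMappings": true }
        }
      ]
    }
    EOF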
	I0930 04:08:46.748905    4929 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0930 04:08:46.748953    4929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 04:08:46.748979    4929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-520000 minikube.k8s.io/updated_at=2024_09_30T04_08_46_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=b68b4b088317c82ffa16da1c47933e77f0f5d128 minikube.k8s.io/name=running-upgrade-520000 minikube.k8s.io/primary=true
	I0930 04:08:46.788572    4929 kubeadm.go:1113] duration metric: took 39.660041ms to wait for elevateKubeSystemPrivileges
	I0930 04:08:46.788613    4929 ops.go:34] apiserver oom_adj: -16
	I0930 04:08:46.788618    4929 kubeadm.go:394] duration metric: took 4m12.158657917s to StartCluster
	I0930 04:08:46.788629    4929 settings.go:142] acquiring lock: {Name:mk8d331f80592adde11c8565cba0670e3b2db485 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 04:08:46.788714    4929 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19734-1406/kubeconfig
	I0930 04:08:46.789066    4929 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19734-1406/kubeconfig: {Name:mkab83a5d15ec3b983b07760462d9a2ee8e3b4e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
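Once the kubeconfig write lock is released, the updated file can be inspected with stock kubectl; a quick sanity check, assuming the file now points at the running-upgrade-520000 context:

    kubectl config current-context \
      --kubeconfig=/Users/jenkins/minikube-integration/19734-1406/kubeconfig
    kubectl config get-contexts \
      --kubeconfig=/Users/jenkins/minikube-integration/19734-1406/kubeconfig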
	I0930 04:08:46.789259    4929 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0930 04:08:46.789304    4929 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0930 04:08:46.789368    4929 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-520000"
	I0930 04:08:46.789377    4929 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-520000"
	W0930 04:08:46.789380    4929 addons.go:243] addon storage-provisioner should already be in state true
	I0930 04:08:46.789382    4929 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-520000"
	I0930 04:08:46.789390    4929 host.go:66] Checking if "running-upgrade-520000" exists ...
	I0930 04:08:46.789394    4929 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-520000"
	I0930 04:08:46.789590    4929 config.go:182] Loaded profile config "running-upgrade-520000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0930 04:08:46.789695    4929 retry.go:31] will retry after 1.47803988s: connect: dial unix /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/running-upgrade-520000/monitor: connect: connection refused
	I0930 04:08:46.792383    4929 out.go:177] * Verifying Kubernetes components...
	I0930 04:08:46.800308    4929 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0930 04:08:46.804221    4929 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 04:08:46.808355    4929 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0930 04:08:46.808363    4929 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0930 04:08:46.808370    4929 sshutil.go:53] new ssh client: &{IP:localhost Port:50253 SSHKeyPath:/Users/jenkins/minikube-integration/19734-1406/.minikube/machines/running-upgrade-520000/id_rsa Username:docker}
	I0930 04:08:46.904030    4929 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0930 04:08:46.909084    4929 api_server.go:52] waiting for apiserver process to appear ...
	I0930 04:08:46.909134    4929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 04:08:46.913246    4929 api_server.go:72] duration metric: took 123.974792ms to wait for apiserver process to appear ...
	I0930 04:08:46.913257    4929 api_server.go:88] waiting for apiserver healthz status ...
	I0930 04:08:46.913266    4929 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:08:46.962163    4929 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0930 04:08:48.270746    4929 kapi.go:59] client config for running-upgrade-520000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19734-1406/.minikube/profiles/running-upgrade-520000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19734-1406/.minikube/profiles/running-upgrade-520000/client.key", CAFile:"/Users/jenkins/minikube-integration/19734-1406/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1025c25d0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0930 04:08:48.270890    4929 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-520000"
	W0930 04:08:48.270896    4929 addons.go:243] addon default-storageclass should already be in state true
	I0930 04:08:48.270909    4929 host.go:66] Checking if "running-upgrade-520000" exists ...
	I0930 04:08:48.271563    4929 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0930 04:08:48.271571    4929 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0930 04:08:48.271577    4929 sshutil.go:53] new ssh client: &{IP:localhost Port:50253 SSHKeyPath:/Users/jenkins/minikube-integration/19734-1406/.minikube/machines/running-upgrade-520000/id_rsa Username:docker}
	I0930 04:08:48.308094    4929 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0930 04:08:48.359532    4929 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0930 04:08:48.359546    4929 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0930 04:08:51.915255    4929 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
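This is the first of a long series of identical probe failures: api_server.go re-checks /healthz roughly every five seconds and each attempt dies on the client timeout. 10.0.2.15 is QEMU's default user-mode (slirp) guest address, which is generally not routable from the host, consistent with every probe timing out here. The same endpoint can be probed by hand from inside the guest, where the address is local; a hypothetical sketch:

    # manual probe from inside the guest, where 10.0.2.15 resolves locally
    minikube ssh -p running-upgrade-520000 -- \
      sudo curl -k --max-time 5 https://10.0.2.15:8443/healthz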
	I0930 04:08:51.915284    4929 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:08:56.915448    4929 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:08:56.915474    4929 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:09:01.915750    4929 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:09:01.915808    4929 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:09:06.916289    4929 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:09:06.916335    4929 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:09:11.917013    4929 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:09:11.917052    4929 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:09:16.917717    4929 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:09:16.917737    4929 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0930 04:09:18.361469    4929 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0930 04:09:18.365278    4929 out.go:177] * Enabled addons: storage-provisioner
	I0930 04:09:18.377047    4929 addons.go:510] duration metric: took 31.588195s for enable addons: enabled=[storage-provisioner]
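default-storageclass failed only because the StorageClasses list call could not reach the apiserver; the storage-provisioner manifest itself was applied earlier via kubectl. Had the apiserver been reachable, both outcomes could be confirmed with plain kubectl; a hypothetical verification:

    # assumes a reachable apiserver
    kubectl --kubeconfig=/Users/jenkins/minikube-integration/19734-1406/kubeconfig \
      -n kube-system get pod storage-provisioner
    kubectl --kubeconfig=/Users/jenkins/minikube-integration/19734-1406/kubeconfig \
      get storageclass
    # the default class carries the annotation
    # storageclass.kubernetes.io/is-default-class=true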
	I0930 04:09:21.919024    4929 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:09:21.919085    4929 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:09:26.920540    4929 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:09:26.920581    4929 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:09:31.922442    4929 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:09:31.922485    4929 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:09:36.924681    4929 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:09:36.924702    4929 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:09:41.925341    4929 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:09:41.925387    4929 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:09:46.926833    4929 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:09:46.926991    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0930 04:09:46.940368    4929 logs.go:276] 1 containers: [c3591c3891b2]
	I0930 04:09:46.940450    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0930 04:09:46.951161    4929 logs.go:276] 1 containers: [c9f50b35a283]
	I0930 04:09:46.951255    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0930 04:09:46.961634    4929 logs.go:276] 2 containers: [85a0a5385195 4b60eaea6a29]
	I0930 04:09:46.961718    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0930 04:09:46.972318    4929 logs.go:276] 1 containers: [7dc64314198d]
	I0930 04:09:46.972397    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0930 04:09:46.983095    4929 logs.go:276] 1 containers: [485edac7e4e9]
	I0930 04:09:46.983179    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0930 04:09:46.993438    4929 logs.go:276] 1 containers: [55b4e3fee39e]
	I0930 04:09:46.993517    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0930 04:09:47.003957    4929 logs.go:276] 0 containers: []
	W0930 04:09:47.003967    4929 logs.go:278] No container was found matching "kindnet"
	I0930 04:09:47.004040    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0930 04:09:47.014427    4929 logs.go:276] 1 containers: [918371f0f495]
	I0930 04:09:47.014442    4929 logs.go:123] Gathering logs for container status ...
	I0930 04:09:47.014448    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 04:09:47.030154    4929 logs.go:123] Gathering logs for kubelet ...
	I0930 04:09:47.030170    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 04:09:47.065698    4929 logs.go:123] Gathering logs for describe nodes ...
	I0930 04:09:47.065709    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0930 04:09:47.102141    4929 logs.go:123] Gathering logs for kube-apiserver [c3591c3891b2] ...
	I0930 04:09:47.102157    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3591c3891b2"
	I0930 04:09:47.120354    4929 logs.go:123] Gathering logs for etcd [c9f50b35a283] ...
	I0930 04:09:47.120368    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9f50b35a283"
	I0930 04:09:47.134436    4929 logs.go:123] Gathering logs for coredns [4b60eaea6a29] ...
	I0930 04:09:47.134445    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b60eaea6a29"
	I0930 04:09:47.146024    4929 logs.go:123] Gathering logs for kube-scheduler [7dc64314198d] ...
	I0930 04:09:47.146034    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7dc64314198d"
	I0930 04:09:47.165353    4929 logs.go:123] Gathering logs for storage-provisioner [918371f0f495] ...
	I0930 04:09:47.165364    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 918371f0f495"
	I0930 04:09:47.176359    4929 logs.go:123] Gathering logs for dmesg ...
	I0930 04:09:47.176368    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 04:09:47.180697    4929 logs.go:123] Gathering logs for coredns [85a0a5385195] ...
	I0930 04:09:47.180706    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85a0a5385195"
	I0930 04:09:47.192308    4929 logs.go:123] Gathering logs for kube-proxy [485edac7e4e9] ...
	I0930 04:09:47.192317    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 485edac7e4e9"
	I0930 04:09:47.204057    4929 logs.go:123] Gathering logs for kube-controller-manager [55b4e3fee39e] ...
	I0930 04:09:47.204066    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 55b4e3fee39e"
	I0930 04:09:47.221856    4929 logs.go:123] Gathering logs for Docker ...
	I0930 04:09:47.221867    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
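The gathering pass above repeats the same two-step pattern for every component: resolve container IDs with a k8s_<name> filter, then tail each container's log. The whole pass condenses to a short loop; this sketch is equivalent to what logs.go runs (the component list is taken from this log, the loop itself is an illustration):

    # condensed form of the log-gathering pass
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet storage-provisioner; do
      for id in $(docker ps -a --filter="name=k8s_${c}" --format='{{.ID}}'); do
        echo "== ${c} ${id} =="
        docker logs --tail 400 "${id}"
      done
    done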
	I0930 04:09:49.747343    4929 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:09:54.746910    4929 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:09:54.747433    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0930 04:09:54.779354    4929 logs.go:276] 1 containers: [c3591c3891b2]
	I0930 04:09:54.779513    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0930 04:09:54.797665    4929 logs.go:276] 1 containers: [c9f50b35a283]
	I0930 04:09:54.797785    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0930 04:09:54.812005    4929 logs.go:276] 2 containers: [85a0a5385195 4b60eaea6a29]
	I0930 04:09:54.812092    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0930 04:09:54.824179    4929 logs.go:276] 1 containers: [7dc64314198d]
	I0930 04:09:54.824252    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0930 04:09:54.834964    4929 logs.go:276] 1 containers: [485edac7e4e9]
	I0930 04:09:54.835047    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0930 04:09:54.845250    4929 logs.go:276] 1 containers: [55b4e3fee39e]
	I0930 04:09:54.845333    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0930 04:09:54.855877    4929 logs.go:276] 0 containers: []
	W0930 04:09:54.855889    4929 logs.go:278] No container was found matching "kindnet"
	I0930 04:09:54.855965    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0930 04:09:54.867897    4929 logs.go:276] 1 containers: [918371f0f495]
	I0930 04:09:54.867913    4929 logs.go:123] Gathering logs for etcd [c9f50b35a283] ...
	I0930 04:09:54.867918    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9f50b35a283"
	I0930 04:09:54.883088    4929 logs.go:123] Gathering logs for kube-scheduler [7dc64314198d] ...
	I0930 04:09:54.883103    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7dc64314198d"
	I0930 04:09:54.899389    4929 logs.go:123] Gathering logs for kube-proxy [485edac7e4e9] ...
	I0930 04:09:54.899404    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 485edac7e4e9"
	I0930 04:09:54.911788    4929 logs.go:123] Gathering logs for kube-controller-manager [55b4e3fee39e] ...
	I0930 04:09:54.911798    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 55b4e3fee39e"
	I0930 04:09:54.930001    4929 logs.go:123] Gathering logs for Docker ...
	I0930 04:09:54.930011    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0930 04:09:54.954840    4929 logs.go:123] Gathering logs for kubelet ...
	I0930 04:09:54.954847    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 04:09:54.989089    4929 logs.go:123] Gathering logs for describe nodes ...
	I0930 04:09:54.989105    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0930 04:09:55.023904    4929 logs.go:123] Gathering logs for kube-apiserver [c3591c3891b2] ...
	I0930 04:09:55.023916    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3591c3891b2"
	I0930 04:09:55.038268    4929 logs.go:123] Gathering logs for container status ...
	I0930 04:09:55.038278    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 04:09:55.051150    4929 logs.go:123] Gathering logs for storage-provisioner [918371f0f495] ...
	I0930 04:09:55.051165    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 918371f0f495"
	I0930 04:09:55.064065    4929 logs.go:123] Gathering logs for dmesg ...
	I0930 04:09:55.064079    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 04:09:55.068598    4929 logs.go:123] Gathering logs for coredns [85a0a5385195] ...
	I0930 04:09:55.068605    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85a0a5385195"
	I0930 04:09:55.080181    4929 logs.go:123] Gathering logs for coredns [4b60eaea6a29] ...
	I0930 04:09:55.080190    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b60eaea6a29"
	I0930 04:09:57.592368    4929 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:10:02.593051    4929 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:10:02.593269    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0930 04:10:02.604683    4929 logs.go:276] 1 containers: [c3591c3891b2]
	I0930 04:10:02.604777    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0930 04:10:02.615302    4929 logs.go:276] 1 containers: [c9f50b35a283]
	I0930 04:10:02.615383    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0930 04:10:02.626566    4929 logs.go:276] 2 containers: [85a0a5385195 4b60eaea6a29]
	I0930 04:10:02.626654    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0930 04:10:02.637190    4929 logs.go:276] 1 containers: [7dc64314198d]
	I0930 04:10:02.637263    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0930 04:10:02.647500    4929 logs.go:276] 1 containers: [485edac7e4e9]
	I0930 04:10:02.647583    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0930 04:10:02.658355    4929 logs.go:276] 1 containers: [55b4e3fee39e]
	I0930 04:10:02.658442    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0930 04:10:02.668023    4929 logs.go:276] 0 containers: []
	W0930 04:10:02.668042    4929 logs.go:278] No container was found matching "kindnet"
	I0930 04:10:02.668116    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0930 04:10:02.678268    4929 logs.go:276] 1 containers: [918371f0f495]
	I0930 04:10:02.678283    4929 logs.go:123] Gathering logs for dmesg ...
	I0930 04:10:02.678288    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 04:10:02.683542    4929 logs.go:123] Gathering logs for coredns [85a0a5385195] ...
	I0930 04:10:02.683551    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85a0a5385195"
	I0930 04:10:02.696906    4929 logs.go:123] Gathering logs for kube-scheduler [7dc64314198d] ...
	I0930 04:10:02.696919    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7dc64314198d"
	I0930 04:10:02.712089    4929 logs.go:123] Gathering logs for kube-proxy [485edac7e4e9] ...
	I0930 04:10:02.712099    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 485edac7e4e9"
	I0930 04:10:02.724277    4929 logs.go:123] Gathering logs for kube-controller-manager [55b4e3fee39e] ...
	I0930 04:10:02.724287    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 55b4e3fee39e"
	I0930 04:10:02.741994    4929 logs.go:123] Gathering logs for kubelet ...
	I0930 04:10:02.742004    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 04:10:02.776453    4929 logs.go:123] Gathering logs for describe nodes ...
	I0930 04:10:02.776463    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0930 04:10:02.813189    4929 logs.go:123] Gathering logs for kube-apiserver [c3591c3891b2] ...
	I0930 04:10:02.813202    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3591c3891b2"
	I0930 04:10:02.830936    4929 logs.go:123] Gathering logs for etcd [c9f50b35a283] ...
	I0930 04:10:02.830947    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9f50b35a283"
	I0930 04:10:02.844665    4929 logs.go:123] Gathering logs for coredns [4b60eaea6a29] ...
	I0930 04:10:02.844674    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b60eaea6a29"
	I0930 04:10:02.864970    4929 logs.go:123] Gathering logs for storage-provisioner [918371f0f495] ...
	I0930 04:10:02.864985    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 918371f0f495"
	I0930 04:10:02.877017    4929 logs.go:123] Gathering logs for Docker ...
	I0930 04:10:02.877027    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0930 04:10:02.900392    4929 logs.go:123] Gathering logs for container status ...
	I0930 04:10:02.900401    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 04:10:05.412034    4929 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:10:10.412970    4929 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:10:10.413138    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0930 04:10:10.426952    4929 logs.go:276] 1 containers: [c3591c3891b2]
	I0930 04:10:10.427049    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0930 04:10:10.438774    4929 logs.go:276] 1 containers: [c9f50b35a283]
	I0930 04:10:10.438856    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0930 04:10:10.450077    4929 logs.go:276] 2 containers: [85a0a5385195 4b60eaea6a29]
	I0930 04:10:10.450165    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0930 04:10:10.468942    4929 logs.go:276] 1 containers: [7dc64314198d]
	I0930 04:10:10.469024    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0930 04:10:10.479188    4929 logs.go:276] 1 containers: [485edac7e4e9]
	I0930 04:10:10.479271    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0930 04:10:10.489532    4929 logs.go:276] 1 containers: [55b4e3fee39e]
	I0930 04:10:10.489609    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0930 04:10:10.499758    4929 logs.go:276] 0 containers: []
	W0930 04:10:10.499786    4929 logs.go:278] No container was found matching "kindnet"
	I0930 04:10:10.499870    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0930 04:10:10.510748    4929 logs.go:276] 1 containers: [918371f0f495]
	I0930 04:10:10.510764    4929 logs.go:123] Gathering logs for kube-proxy [485edac7e4e9] ...
	I0930 04:10:10.510772    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 485edac7e4e9"
	I0930 04:10:10.522273    4929 logs.go:123] Gathering logs for kube-controller-manager [55b4e3fee39e] ...
	I0930 04:10:10.522287    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 55b4e3fee39e"
	I0930 04:10:10.539396    4929 logs.go:123] Gathering logs for storage-provisioner [918371f0f495] ...
	I0930 04:10:10.539410    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 918371f0f495"
	I0930 04:10:10.550849    4929 logs.go:123] Gathering logs for container status ...
	I0930 04:10:10.550859    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 04:10:10.566236    4929 logs.go:123] Gathering logs for kubelet ...
	I0930 04:10:10.566247    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 04:10:10.602149    4929 logs.go:123] Gathering logs for describe nodes ...
	I0930 04:10:10.602156    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0930 04:10:10.639015    4929 logs.go:123] Gathering logs for kube-scheduler [7dc64314198d] ...
	I0930 04:10:10.639027    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7dc64314198d"
	I0930 04:10:10.662091    4929 logs.go:123] Gathering logs for coredns [85a0a5385195] ...
	I0930 04:10:10.662102    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85a0a5385195"
	I0930 04:10:10.674493    4929 logs.go:123] Gathering logs for coredns [4b60eaea6a29] ...
	I0930 04:10:10.674509    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b60eaea6a29"
	I0930 04:10:10.693122    4929 logs.go:123] Gathering logs for Docker ...
	I0930 04:10:10.693134    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0930 04:10:10.741068    4929 logs.go:123] Gathering logs for dmesg ...
	I0930 04:10:10.741080    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 04:10:10.746428    4929 logs.go:123] Gathering logs for kube-apiserver [c3591c3891b2] ...
	I0930 04:10:10.746435    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3591c3891b2"
	I0930 04:10:10.760748    4929 logs.go:123] Gathering logs for etcd [c9f50b35a283] ...
	I0930 04:10:10.760759    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9f50b35a283"
	I0930 04:10:13.276713    4929 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:10:18.278292    4929 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:10:18.278426    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0930 04:10:18.291884    4929 logs.go:276] 1 containers: [c3591c3891b2]
	I0930 04:10:18.291979    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0930 04:10:18.303440    4929 logs.go:276] 1 containers: [c9f50b35a283]
	I0930 04:10:18.303520    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0930 04:10:18.313882    4929 logs.go:276] 2 containers: [85a0a5385195 4b60eaea6a29]
	I0930 04:10:18.313964    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0930 04:10:18.326120    4929 logs.go:276] 1 containers: [7dc64314198d]
	I0930 04:10:18.326210    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0930 04:10:18.336965    4929 logs.go:276] 1 containers: [485edac7e4e9]
	I0930 04:10:18.337058    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0930 04:10:18.347782    4929 logs.go:276] 1 containers: [55b4e3fee39e]
	I0930 04:10:18.347867    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0930 04:10:18.358144    4929 logs.go:276] 0 containers: []
	W0930 04:10:18.358155    4929 logs.go:278] No container was found matching "kindnet"
	I0930 04:10:18.358225    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0930 04:10:18.369066    4929 logs.go:276] 1 containers: [918371f0f495]
	I0930 04:10:18.369083    4929 logs.go:123] Gathering logs for dmesg ...
	I0930 04:10:18.369089    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 04:10:18.373990    4929 logs.go:123] Gathering logs for coredns [85a0a5385195] ...
	I0930 04:10:18.374002    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85a0a5385195"
	I0930 04:10:18.389915    4929 logs.go:123] Gathering logs for kube-scheduler [7dc64314198d] ...
	I0930 04:10:18.389931    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7dc64314198d"
	I0930 04:10:18.410400    4929 logs.go:123] Gathering logs for kube-proxy [485edac7e4e9] ...
	I0930 04:10:18.410414    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 485edac7e4e9"
	I0930 04:10:18.422088    4929 logs.go:123] Gathering logs for storage-provisioner [918371f0f495] ...
	I0930 04:10:18.422101    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 918371f0f495"
	I0930 04:10:18.433308    4929 logs.go:123] Gathering logs for kubelet ...
	I0930 04:10:18.433318    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 04:10:18.466462    4929 logs.go:123] Gathering logs for kube-apiserver [c3591c3891b2] ...
	I0930 04:10:18.466470    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3591c3891b2"
	I0930 04:10:18.489109    4929 logs.go:123] Gathering logs for etcd [c9f50b35a283] ...
	I0930 04:10:18.489118    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9f50b35a283"
	I0930 04:10:18.507109    4929 logs.go:123] Gathering logs for coredns [4b60eaea6a29] ...
	I0930 04:10:18.507120    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b60eaea6a29"
	I0930 04:10:18.518553    4929 logs.go:123] Gathering logs for kube-controller-manager [55b4e3fee39e] ...
	I0930 04:10:18.518565    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 55b4e3fee39e"
	I0930 04:10:18.536105    4929 logs.go:123] Gathering logs for Docker ...
	I0930 04:10:18.536114    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0930 04:10:18.559646    4929 logs.go:123] Gathering logs for container status ...
	I0930 04:10:18.559659    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 04:10:18.571300    4929 logs.go:123] Gathering logs for describe nodes ...
	I0930 04:10:18.571313    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0930 04:10:21.108740    4929 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:10:26.110754    4929 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:10:26.111254    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0930 04:10:26.154900    4929 logs.go:276] 1 containers: [c3591c3891b2]
	I0930 04:10:26.155068    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0930 04:10:26.174839    4929 logs.go:276] 1 containers: [c9f50b35a283]
	I0930 04:10:26.174954    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0930 04:10:26.190112    4929 logs.go:276] 2 containers: [85a0a5385195 4b60eaea6a29]
	I0930 04:10:26.190211    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0930 04:10:26.203704    4929 logs.go:276] 1 containers: [7dc64314198d]
	I0930 04:10:26.203784    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0930 04:10:26.219117    4929 logs.go:276] 1 containers: [485edac7e4e9]
	I0930 04:10:26.219211    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0930 04:10:26.230273    4929 logs.go:276] 1 containers: [55b4e3fee39e]
	I0930 04:10:26.230353    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0930 04:10:26.241181    4929 logs.go:276] 0 containers: []
	W0930 04:10:26.241193    4929 logs.go:278] No container was found matching "kindnet"
	I0930 04:10:26.241267    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0930 04:10:26.251665    4929 logs.go:276] 1 containers: [918371f0f495]
	I0930 04:10:26.251679    4929 logs.go:123] Gathering logs for kube-proxy [485edac7e4e9] ...
	I0930 04:10:26.251684    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 485edac7e4e9"
	I0930 04:10:26.264253    4929 logs.go:123] Gathering logs for kube-controller-manager [55b4e3fee39e] ...
	I0930 04:10:26.264264    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 55b4e3fee39e"
	I0930 04:10:26.282417    4929 logs.go:123] Gathering logs for container status ...
	I0930 04:10:26.282434    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 04:10:26.294102    4929 logs.go:123] Gathering logs for dmesg ...
	I0930 04:10:26.294117    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 04:10:26.298362    4929 logs.go:123] Gathering logs for describe nodes ...
	I0930 04:10:26.298369    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0930 04:10:26.333338    4929 logs.go:123] Gathering logs for etcd [c9f50b35a283] ...
	I0930 04:10:26.333353    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9f50b35a283"
	I0930 04:10:26.347631    4929 logs.go:123] Gathering logs for coredns [4b60eaea6a29] ...
	I0930 04:10:26.347644    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b60eaea6a29"
	I0930 04:10:26.359281    4929 logs.go:123] Gathering logs for kube-scheduler [7dc64314198d] ...
	I0930 04:10:26.359297    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7dc64314198d"
	I0930 04:10:26.374513    4929 logs.go:123] Gathering logs for storage-provisioner [918371f0f495] ...
	I0930 04:10:26.374523    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 918371f0f495"
	I0930 04:10:26.385954    4929 logs.go:123] Gathering logs for Docker ...
	I0930 04:10:26.385968    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0930 04:10:26.410506    4929 logs.go:123] Gathering logs for kubelet ...
	I0930 04:10:26.410515    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 04:10:26.445471    4929 logs.go:123] Gathering logs for kube-apiserver [c3591c3891b2] ...
	I0930 04:10:26.445479    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3591c3891b2"
	I0930 04:10:26.460023    4929 logs.go:123] Gathering logs for coredns [85a0a5385195] ...
	I0930 04:10:26.460038    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85a0a5385195"
	I0930 04:10:28.973882    4929 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:10:33.975913    4929 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:10:33.976261    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0930 04:10:34.004720    4929 logs.go:276] 1 containers: [c3591c3891b2]
	I0930 04:10:34.004874    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0930 04:10:34.022637    4929 logs.go:276] 1 containers: [c9f50b35a283]
	I0930 04:10:34.022737    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0930 04:10:34.042262    4929 logs.go:276] 2 containers: [85a0a5385195 4b60eaea6a29]
	I0930 04:10:34.042349    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0930 04:10:34.053686    4929 logs.go:276] 1 containers: [7dc64314198d]
	I0930 04:10:34.053770    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0930 04:10:34.064378    4929 logs.go:276] 1 containers: [485edac7e4e9]
	I0930 04:10:34.064460    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0930 04:10:34.074435    4929 logs.go:276] 1 containers: [55b4e3fee39e]
	I0930 04:10:34.074512    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0930 04:10:34.083890    4929 logs.go:276] 0 containers: []
	W0930 04:10:34.083903    4929 logs.go:278] No container was found matching "kindnet"
	I0930 04:10:34.083971    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0930 04:10:34.098792    4929 logs.go:276] 1 containers: [918371f0f495]
	I0930 04:10:34.098809    4929 logs.go:123] Gathering logs for kube-scheduler [7dc64314198d] ...
	I0930 04:10:34.098814    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7dc64314198d"
	I0930 04:10:34.113725    4929 logs.go:123] Gathering logs for kube-proxy [485edac7e4e9] ...
	I0930 04:10:34.113735    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 485edac7e4e9"
	I0930 04:10:34.128281    4929 logs.go:123] Gathering logs for kube-controller-manager [55b4e3fee39e] ...
	I0930 04:10:34.128293    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 55b4e3fee39e"
	I0930 04:10:34.160523    4929 logs.go:123] Gathering logs for kubelet ...
	I0930 04:10:34.160534    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 04:10:34.195435    4929 logs.go:123] Gathering logs for dmesg ...
	I0930 04:10:34.195442    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 04:10:34.200321    4929 logs.go:123] Gathering logs for kube-apiserver [c3591c3891b2] ...
	I0930 04:10:34.200327    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3591c3891b2"
	I0930 04:10:34.214072    4929 logs.go:123] Gathering logs for etcd [c9f50b35a283] ...
	I0930 04:10:34.214081    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9f50b35a283"
	I0930 04:10:34.253621    4929 logs.go:123] Gathering logs for coredns [4b60eaea6a29] ...
	I0930 04:10:34.253632    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b60eaea6a29"
	I0930 04:10:34.266227    4929 logs.go:123] Gathering logs for storage-provisioner [918371f0f495] ...
	I0930 04:10:34.266237    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 918371f0f495"
	I0930 04:10:34.282128    4929 logs.go:123] Gathering logs for container status ...
	I0930 04:10:34.282142    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 04:10:34.294387    4929 logs.go:123] Gathering logs for describe nodes ...
	I0930 04:10:34.294401    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0930 04:10:34.329783    4929 logs.go:123] Gathering logs for coredns [85a0a5385195] ...
	I0930 04:10:34.329794    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85a0a5385195"
	I0930 04:10:34.342152    4929 logs.go:123] Gathering logs for Docker ...
	I0930 04:10:34.342168    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0930 04:10:36.869093    4929 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:10:41.871177    4929 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:10:41.871316    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0930 04:10:41.882678    4929 logs.go:276] 1 containers: [c3591c3891b2]
	I0930 04:10:41.882775    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0930 04:10:41.893366    4929 logs.go:276] 1 containers: [c9f50b35a283]
	I0930 04:10:41.893449    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0930 04:10:41.903556    4929 logs.go:276] 2 containers: [85a0a5385195 4b60eaea6a29]
	I0930 04:10:41.903638    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0930 04:10:41.914184    4929 logs.go:276] 1 containers: [7dc64314198d]
	I0930 04:10:41.914262    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0930 04:10:41.924896    4929 logs.go:276] 1 containers: [485edac7e4e9]
	I0930 04:10:41.924974    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0930 04:10:41.935464    4929 logs.go:276] 1 containers: [55b4e3fee39e]
	I0930 04:10:41.935542    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0930 04:10:41.947840    4929 logs.go:276] 0 containers: []
	W0930 04:10:41.947852    4929 logs.go:278] No container was found matching "kindnet"
	I0930 04:10:41.947924    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0930 04:10:41.965291    4929 logs.go:276] 1 containers: [918371f0f495]
	I0930 04:10:41.965305    4929 logs.go:123] Gathering logs for kube-proxy [485edac7e4e9] ...
	I0930 04:10:41.965310    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 485edac7e4e9"
	I0930 04:10:41.981022    4929 logs.go:123] Gathering logs for kube-controller-manager [55b4e3fee39e] ...
	I0930 04:10:41.981032    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 55b4e3fee39e"
	I0930 04:10:42.002950    4929 logs.go:123] Gathering logs for Docker ...
	I0930 04:10:42.002960    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0930 04:10:42.028348    4929 logs.go:123] Gathering logs for kubelet ...
	I0930 04:10:42.028355    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 04:10:42.061089    4929 logs.go:123] Gathering logs for describe nodes ...
	I0930 04:10:42.061096    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0930 04:10:42.097920    4929 logs.go:123] Gathering logs for coredns [4b60eaea6a29] ...
	I0930 04:10:42.097929    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b60eaea6a29"
	I0930 04:10:42.110146    4929 logs.go:123] Gathering logs for coredns [85a0a5385195] ...
	I0930 04:10:42.110161    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85a0a5385195"
	I0930 04:10:42.122019    4929 logs.go:123] Gathering logs for kube-scheduler [7dc64314198d] ...
	I0930 04:10:42.122029    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7dc64314198d"
	I0930 04:10:42.140443    4929 logs.go:123] Gathering logs for storage-provisioner [918371f0f495] ...
	I0930 04:10:42.140454    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 918371f0f495"
	I0930 04:10:42.153394    4929 logs.go:123] Gathering logs for container status ...
	I0930 04:10:42.153405    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 04:10:42.165073    4929 logs.go:123] Gathering logs for dmesg ...
	I0930 04:10:42.165084    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 04:10:42.169491    4929 logs.go:123] Gathering logs for kube-apiserver [c3591c3891b2] ...
	I0930 04:10:42.169499    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3591c3891b2"
	I0930 04:10:42.183692    4929 logs.go:123] Gathering logs for etcd [c9f50b35a283] ...
	I0930 04:10:42.183703    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9f50b35a283"
	I0930 04:10:44.699620    4929 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:10:49.701993    4929 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:10:49.702410    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0930 04:10:49.733790    4929 logs.go:276] 1 containers: [c3591c3891b2]
	I0930 04:10:49.733940    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0930 04:10:49.752890    4929 logs.go:276] 1 containers: [c9f50b35a283]
	I0930 04:10:49.753004    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0930 04:10:49.766610    4929 logs.go:276] 2 containers: [85a0a5385195 4b60eaea6a29]
	I0930 04:10:49.766712    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0930 04:10:49.778746    4929 logs.go:276] 1 containers: [7dc64314198d]
	I0930 04:10:49.778834    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0930 04:10:49.789992    4929 logs.go:276] 1 containers: [485edac7e4e9]
	I0930 04:10:49.790072    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0930 04:10:49.800705    4929 logs.go:276] 1 containers: [55b4e3fee39e]
	I0930 04:10:49.800786    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0930 04:10:49.810922    4929 logs.go:276] 0 containers: []
	W0930 04:10:49.810935    4929 logs.go:278] No container was found matching "kindnet"
	I0930 04:10:49.811007    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0930 04:10:49.824032    4929 logs.go:276] 1 containers: [918371f0f495]
	I0930 04:10:49.824049    4929 logs.go:123] Gathering logs for kubelet ...
	I0930 04:10:49.824055    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 04:10:49.859229    4929 logs.go:123] Gathering logs for dmesg ...
	I0930 04:10:49.859240    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 04:10:49.863943    4929 logs.go:123] Gathering logs for describe nodes ...
	I0930 04:10:49.863951    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0930 04:10:49.899125    4929 logs.go:123] Gathering logs for kube-controller-manager [55b4e3fee39e] ...
	I0930 04:10:49.899137    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 55b4e3fee39e"
	I0930 04:10:49.917548    4929 logs.go:123] Gathering logs for storage-provisioner [918371f0f495] ...
	I0930 04:10:49.917561    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 918371f0f495"
	I0930 04:10:49.934432    4929 logs.go:123] Gathering logs for Docker ...
	I0930 04:10:49.934445    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0930 04:10:49.959249    4929 logs.go:123] Gathering logs for container status ...
	I0930 04:10:49.959257    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 04:10:49.970602    4929 logs.go:123] Gathering logs for kube-apiserver [c3591c3891b2] ...
	I0930 04:10:49.970614    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3591c3891b2"
	I0930 04:10:49.985476    4929 logs.go:123] Gathering logs for etcd [c9f50b35a283] ...
	I0930 04:10:49.985486    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9f50b35a283"
	I0930 04:10:49.999550    4929 logs.go:123] Gathering logs for coredns [85a0a5385195] ...
	I0930 04:10:49.999561    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85a0a5385195"
	I0930 04:10:50.011324    4929 logs.go:123] Gathering logs for coredns [4b60eaea6a29] ...
	I0930 04:10:50.011340    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b60eaea6a29"
	I0930 04:10:50.029960    4929 logs.go:123] Gathering logs for kube-scheduler [7dc64314198d] ...
	I0930 04:10:50.029970    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7dc64314198d"
	I0930 04:10:50.044900    4929 logs.go:123] Gathering logs for kube-proxy [485edac7e4e9] ...
	I0930 04:10:50.044910    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 485edac7e4e9"
	I0930 04:10:52.564973    4929 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:10:57.567126    4929 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:10:57.567343    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0930 04:10:57.582959    4929 logs.go:276] 1 containers: [c3591c3891b2]
	I0930 04:10:57.583065    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0930 04:10:57.595120    4929 logs.go:276] 1 containers: [c9f50b35a283]
	I0930 04:10:57.595236    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0930 04:10:57.606537    4929 logs.go:276] 2 containers: [85a0a5385195 4b60eaea6a29]
	I0930 04:10:57.606618    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0930 04:10:57.617044    4929 logs.go:276] 1 containers: [7dc64314198d]
	I0930 04:10:57.617127    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0930 04:10:57.627673    4929 logs.go:276] 1 containers: [485edac7e4e9]
	I0930 04:10:57.627754    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0930 04:10:57.639142    4929 logs.go:276] 1 containers: [55b4e3fee39e]
	I0930 04:10:57.639229    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0930 04:10:57.649289    4929 logs.go:276] 0 containers: []
	W0930 04:10:57.649301    4929 logs.go:278] No container was found matching "kindnet"
	I0930 04:10:57.649369    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0930 04:10:57.659662    4929 logs.go:276] 1 containers: [918371f0f495]
	I0930 04:10:57.659677    4929 logs.go:123] Gathering logs for etcd [c9f50b35a283] ...
	I0930 04:10:57.659682    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9f50b35a283"
	I0930 04:10:57.674429    4929 logs.go:123] Gathering logs for coredns [4b60eaea6a29] ...
	I0930 04:10:57.674443    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b60eaea6a29"
	I0930 04:10:57.687000    4929 logs.go:123] Gathering logs for kube-proxy [485edac7e4e9] ...
	I0930 04:10:57.687011    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 485edac7e4e9"
	I0930 04:10:57.698864    4929 logs.go:123] Gathering logs for kube-controller-manager [55b4e3fee39e] ...
	I0930 04:10:57.698875    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 55b4e3fee39e"
	I0930 04:10:57.718727    4929 logs.go:123] Gathering logs for storage-provisioner [918371f0f495] ...
	I0930 04:10:57.718738    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 918371f0f495"
	I0930 04:10:57.744741    4929 logs.go:123] Gathering logs for Docker ...
	I0930 04:10:57.744755    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0930 04:10:57.773650    4929 logs.go:123] Gathering logs for container status ...
	I0930 04:10:57.773665    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 04:10:57.808634    4929 logs.go:123] Gathering logs for dmesg ...
	I0930 04:10:57.808646    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 04:10:57.816023    4929 logs.go:123] Gathering logs for describe nodes ...
	I0930 04:10:57.816036    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0930 04:10:57.908421    4929 logs.go:123] Gathering logs for kube-apiserver [c3591c3891b2] ...
	I0930 04:10:57.908436    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3591c3891b2"
	I0930 04:10:57.923141    4929 logs.go:123] Gathering logs for coredns [85a0a5385195] ...
	I0930 04:10:57.923151    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85a0a5385195"
	I0930 04:10:57.934945    4929 logs.go:123] Gathering logs for kube-scheduler [7dc64314198d] ...
	I0930 04:10:57.934957    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7dc64314198d"
	I0930 04:10:57.950231    4929 logs.go:123] Gathering logs for kubelet ...
	I0930 04:10:57.950242    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
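Each retry begins by re-enumerating the control-plane containers. Under the Docker runtime, kubelet names containers k8s_<container>_<pod>_<namespace>_..., so a name filter per component recovers the IDs reported on the logs.go:276 lines. A sketch of that step, with local exec standing in for minikube's SSH runner:

```go
// Sketch of the container-enumeration step (local exec stands in for SSH).
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs lists all containers (running or exited) whose name carries
// the k8s_<component> prefix that kubelet assigns under the Docker runtime.
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Println(c, "error:", err)
			continue
		}
		fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids) // cf. the logs.go:276 lines
	}
}
```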
	I0930 04:11:00.486490    4929 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:11:05.488668    4929 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:11:05.488956    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0930 04:11:05.510648    4929 logs.go:276] 1 containers: [c3591c3891b2]
	I0930 04:11:05.510765    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0930 04:11:05.527070    4929 logs.go:276] 1 containers: [c9f50b35a283]
	I0930 04:11:05.527173    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0930 04:11:05.539733    4929 logs.go:276] 4 containers: [86d6de14d3fe 1c58ad832f66 85a0a5385195 4b60eaea6a29]
	I0930 04:11:05.539827    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0930 04:11:05.550695    4929 logs.go:276] 1 containers: [7dc64314198d]
	I0930 04:11:05.550776    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0930 04:11:05.561147    4929 logs.go:276] 1 containers: [485edac7e4e9]
	I0930 04:11:05.561221    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0930 04:11:05.571604    4929 logs.go:276] 1 containers: [55b4e3fee39e]
	I0930 04:11:05.571682    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0930 04:11:05.582159    4929 logs.go:276] 0 containers: []
	W0930 04:11:05.582172    4929 logs.go:278] No container was found matching "kindnet"
	I0930 04:11:05.582236    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0930 04:11:05.593544    4929 logs.go:276] 1 containers: [918371f0f495]
	I0930 04:11:05.593574    4929 logs.go:123] Gathering logs for kube-apiserver [c3591c3891b2] ...
	I0930 04:11:05.593579    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3591c3891b2"
	I0930 04:11:05.607790    4929 logs.go:123] Gathering logs for coredns [86d6de14d3fe] ...
	I0930 04:11:05.607800    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86d6de14d3fe"
	I0930 04:11:05.619048    4929 logs.go:123] Gathering logs for coredns [1c58ad832f66] ...
	I0930 04:11:05.619058    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c58ad832f66"
	I0930 04:11:05.630766    4929 logs.go:123] Gathering logs for Docker ...
	I0930 04:11:05.630777    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0930 04:11:05.656335    4929 logs.go:123] Gathering logs for kubelet ...
	I0930 04:11:05.656346    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 04:11:05.691289    4929 logs.go:123] Gathering logs for storage-provisioner [918371f0f495] ...
	I0930 04:11:05.691297    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 918371f0f495"
	I0930 04:11:05.703444    4929 logs.go:123] Gathering logs for container status ...
	I0930 04:11:05.703457    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 04:11:05.715552    4929 logs.go:123] Gathering logs for describe nodes ...
	I0930 04:11:05.715565    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0930 04:11:05.750344    4929 logs.go:123] Gathering logs for kube-scheduler [7dc64314198d] ...
	I0930 04:11:05.750356    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7dc64314198d"
	I0930 04:11:05.766032    4929 logs.go:123] Gathering logs for kube-proxy [485edac7e4e9] ...
	I0930 04:11:05.766043    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 485edac7e4e9"
	I0930 04:11:05.778875    4929 logs.go:123] Gathering logs for kube-controller-manager [55b4e3fee39e] ...
	I0930 04:11:05.778886    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 55b4e3fee39e"
	I0930 04:11:05.796403    4929 logs.go:123] Gathering logs for etcd [c9f50b35a283] ...
	I0930 04:11:05.796413    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9f50b35a283"
	I0930 04:11:05.810910    4929 logs.go:123] Gathering logs for coredns [85a0a5385195] ...
	I0930 04:11:05.810920    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85a0a5385195"
	I0930 04:11:05.822728    4929 logs.go:123] Gathering logs for coredns [4b60eaea6a29] ...
	I0930 04:11:05.822737    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b60eaea6a29"
	I0930 04:11:05.834494    4929 logs.go:123] Gathering logs for dmesg ...
	I0930 04:11:05.834505    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
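Once the IDs are known, each "Gathering logs for X [id]" line pairs with a docker logs --tail 400 <id> run through /bin/bash -c. Note also that the coredns count jumps from 2 to 4 at 04:11:05: since docker ps -a lists exited containers too, this is consistent with the two CoreDNS pods restarting and leaving their old containers behind. A local-exec sketch of the collection call:

```go
// Sketch of the per-container log collection seen in the lines above.
package main

import (
	"fmt"
	"os/exec"
)

// gatherContainerLogs tails the last 400 lines of one container's logs,
// run through /bin/bash -c as in the log (SSH runner approximated locally).
func gatherContainerLogs(id string) (string, error) {
	cmd := exec.Command("/bin/bash", "-c", "docker logs --tail 400 "+id)
	out, err := cmd.CombinedOutput() // docker logs writes to stdout and stderr
	return string(out), err
}

func main() {
	// Container ID taken from this log's kube-apiserver entry.
	out, err := gatherContainerLogs("c3591c3891b2")
	if err != nil {
		fmt.Println("gather failed:", err)
	}
	fmt.Print(out)
}
```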
	I0930 04:11:08.341203    4929 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:11:13.343439    4929 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:11:13.343718    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0930 04:11:13.364984    4929 logs.go:276] 1 containers: [c3591c3891b2]
	I0930 04:11:13.365145    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0930 04:11:13.380663    4929 logs.go:276] 1 containers: [c9f50b35a283]
	I0930 04:11:13.380756    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0930 04:11:13.392872    4929 logs.go:276] 4 containers: [86d6de14d3fe 1c58ad832f66 85a0a5385195 4b60eaea6a29]
	I0930 04:11:13.392961    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0930 04:11:13.403021    4929 logs.go:276] 1 containers: [7dc64314198d]
	I0930 04:11:13.403093    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0930 04:11:13.415582    4929 logs.go:276] 1 containers: [485edac7e4e9]
	I0930 04:11:13.415661    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0930 04:11:13.425926    4929 logs.go:276] 1 containers: [55b4e3fee39e]
	I0930 04:11:13.426004    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0930 04:11:13.435863    4929 logs.go:276] 0 containers: []
	W0930 04:11:13.435874    4929 logs.go:278] No container was found matching "kindnet"
	I0930 04:11:13.435946    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0930 04:11:13.448374    4929 logs.go:276] 1 containers: [918371f0f495]
	I0930 04:11:13.448392    4929 logs.go:123] Gathering logs for kube-apiserver [c3591c3891b2] ...
	I0930 04:11:13.448398    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3591c3891b2"
	I0930 04:11:13.462409    4929 logs.go:123] Gathering logs for coredns [86d6de14d3fe] ...
	I0930 04:11:13.462420    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86d6de14d3fe"
	I0930 04:11:13.474169    4929 logs.go:123] Gathering logs for Docker ...
	I0930 04:11:13.474179    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0930 04:11:13.498254    4929 logs.go:123] Gathering logs for dmesg ...
	I0930 04:11:13.498265    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 04:11:13.503015    4929 logs.go:123] Gathering logs for coredns [1c58ad832f66] ...
	I0930 04:11:13.503021    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c58ad832f66"
	I0930 04:11:13.515031    4929 logs.go:123] Gathering logs for kubelet ...
	I0930 04:11:13.515041    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 04:11:13.549840    4929 logs.go:123] Gathering logs for describe nodes ...
	I0930 04:11:13.549855    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0930 04:11:13.586136    4929 logs.go:123] Gathering logs for kube-controller-manager [55b4e3fee39e] ...
	I0930 04:11:13.586146    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 55b4e3fee39e"
	I0930 04:11:13.603951    4929 logs.go:123] Gathering logs for storage-provisioner [918371f0f495] ...
	I0930 04:11:13.603962    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 918371f0f495"
	I0930 04:11:13.615800    4929 logs.go:123] Gathering logs for etcd [c9f50b35a283] ...
	I0930 04:11:13.615811    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9f50b35a283"
	I0930 04:11:13.629719    4929 logs.go:123] Gathering logs for coredns [85a0a5385195] ...
	I0930 04:11:13.629731    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85a0a5385195"
	I0930 04:11:13.641902    4929 logs.go:123] Gathering logs for coredns [4b60eaea6a29] ...
	I0930 04:11:13.641913    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b60eaea6a29"
	I0930 04:11:13.654244    4929 logs.go:123] Gathering logs for kube-scheduler [7dc64314198d] ...
	I0930 04:11:13.654257    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7dc64314198d"
	I0930 04:11:13.669508    4929 logs.go:123] Gathering logs for kube-proxy [485edac7e4e9] ...
	I0930 04:11:13.669518    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 485edac7e4e9"
	I0930 04:11:13.680603    4929 logs.go:123] Gathering logs for container status ...
	I0930 04:11:13.680612    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
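The "container status" step uses a shell fallback: `which crictl || echo crictl` substitutes the resolved crictl path, or the bare word crictl when it is absent, in which case the sudo invocation fails and the trailing || sudo docker ps -a runs instead. A hypothetical Go rendering of the same preference order:

```go
// Hypothetical Go rendering of the crictl-then-docker fallback.
package main

import (
	"fmt"
	"os/exec"
)

func containerStatus() ([]byte, error) {
	// Prefer crictl when it resolves on PATH, mirroring `which crictl`.
	if path, err := exec.LookPath("crictl"); err == nil {
		if out, err := exec.Command("sudo", path, "ps", "-a").Output(); err == nil {
			return out, nil
		}
	}
	// Fall back to docker, mirroring `|| sudo docker ps -a`.
	return exec.Command("sudo", "docker", "ps", "-a").Output()
}

func main() {
	out, err := containerStatus()
	if err != nil {
		fmt.Println("status failed:", err)
		return
	}
	fmt.Print(string(out))
}
```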
	I0930 04:11:16.194197    4929 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:11:21.196502    4929 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:11:21.197255    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0930 04:11:21.220819    4929 logs.go:276] 1 containers: [c3591c3891b2]
	I0930 04:11:21.220953    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0930 04:11:21.236763    4929 logs.go:276] 1 containers: [c9f50b35a283]
	I0930 04:11:21.236892    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0930 04:11:21.254531    4929 logs.go:276] 4 containers: [86d6de14d3fe 1c58ad832f66 85a0a5385195 4b60eaea6a29]
	I0930 04:11:21.254616    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0930 04:11:21.265515    4929 logs.go:276] 1 containers: [7dc64314198d]
	I0930 04:11:21.265594    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0930 04:11:21.276083    4929 logs.go:276] 1 containers: [485edac7e4e9]
	I0930 04:11:21.276163    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0930 04:11:21.287535    4929 logs.go:276] 1 containers: [55b4e3fee39e]
	I0930 04:11:21.287623    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0930 04:11:21.298056    4929 logs.go:276] 0 containers: []
	W0930 04:11:21.298068    4929 logs.go:278] No container was found matching "kindnet"
	I0930 04:11:21.298142    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0930 04:11:21.309240    4929 logs.go:276] 1 containers: [918371f0f495]
	I0930 04:11:21.309258    4929 logs.go:123] Gathering logs for kubelet ...
	I0930 04:11:21.309264    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 04:11:21.344333    4929 logs.go:123] Gathering logs for kube-apiserver [c3591c3891b2] ...
	I0930 04:11:21.344342    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3591c3891b2"
	I0930 04:11:21.358959    4929 logs.go:123] Gathering logs for coredns [86d6de14d3fe] ...
	I0930 04:11:21.358969    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86d6de14d3fe"
	I0930 04:11:21.369941    4929 logs.go:123] Gathering logs for storage-provisioner [918371f0f495] ...
	I0930 04:11:21.370098    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 918371f0f495"
	I0930 04:11:21.382346    4929 logs.go:123] Gathering logs for describe nodes ...
	I0930 04:11:21.382360    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0930 04:11:21.424195    4929 logs.go:123] Gathering logs for etcd [c9f50b35a283] ...
	I0930 04:11:21.424210    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9f50b35a283"
	I0930 04:11:21.438226    4929 logs.go:123] Gathering logs for coredns [1c58ad832f66] ...
	I0930 04:11:21.438236    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c58ad832f66"
	I0930 04:11:21.450137    4929 logs.go:123] Gathering logs for kube-proxy [485edac7e4e9] ...
	I0930 04:11:21.450147    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 485edac7e4e9"
	I0930 04:11:21.463411    4929 logs.go:123] Gathering logs for kube-controller-manager [55b4e3fee39e] ...
	I0930 04:11:21.463423    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 55b4e3fee39e"
	I0930 04:11:21.480953    4929 logs.go:123] Gathering logs for dmesg ...
	I0930 04:11:21.480963    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 04:11:21.485648    4929 logs.go:123] Gathering logs for coredns [85a0a5385195] ...
	I0930 04:11:21.485654    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85a0a5385195"
	I0930 04:11:21.500572    4929 logs.go:123] Gathering logs for coredns [4b60eaea6a29] ...
	I0930 04:11:21.500583    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b60eaea6a29"
	I0930 04:11:21.513652    4929 logs.go:123] Gathering logs for container status ...
	I0930 04:11:21.513665    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 04:11:21.525469    4929 logs.go:123] Gathering logs for kube-scheduler [7dc64314198d] ...
	I0930 04:11:21.525483    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7dc64314198d"
	I0930 04:11:21.540776    4929 logs.go:123] Gathering logs for Docker ...
	I0930 04:11:21.540786    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0930 04:11:24.066039    4929 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:11:29.068317    4929 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:11:29.068632    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0930 04:11:29.093525    4929 logs.go:276] 1 containers: [c3591c3891b2]
	I0930 04:11:29.093656    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0930 04:11:29.112136    4929 logs.go:276] 1 containers: [c9f50b35a283]
	I0930 04:11:29.112239    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0930 04:11:29.124746    4929 logs.go:276] 4 containers: [86d6de14d3fe 1c58ad832f66 85a0a5385195 4b60eaea6a29]
	I0930 04:11:29.124829    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0930 04:11:29.135681    4929 logs.go:276] 1 containers: [7dc64314198d]
	I0930 04:11:29.135764    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0930 04:11:29.147169    4929 logs.go:276] 1 containers: [485edac7e4e9]
	I0930 04:11:29.147255    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0930 04:11:29.158062    4929 logs.go:276] 1 containers: [55b4e3fee39e]
	I0930 04:11:29.158145    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0930 04:11:29.172925    4929 logs.go:276] 0 containers: []
	W0930 04:11:29.172938    4929 logs.go:278] No container was found matching "kindnet"
	I0930 04:11:29.173012    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0930 04:11:29.183636    4929 logs.go:276] 1 containers: [918371f0f495]
	I0930 04:11:29.183652    4929 logs.go:123] Gathering logs for dmesg ...
	I0930 04:11:29.183657    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 04:11:29.188113    4929 logs.go:123] Gathering logs for storage-provisioner [918371f0f495] ...
	I0930 04:11:29.188121    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 918371f0f495"
	I0930 04:11:29.200089    4929 logs.go:123] Gathering logs for container status ...
	I0930 04:11:29.200098    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 04:11:29.211889    4929 logs.go:123] Gathering logs for kube-apiserver [c3591c3891b2] ...
	I0930 04:11:29.211899    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3591c3891b2"
	I0930 04:11:29.228554    4929 logs.go:123] Gathering logs for etcd [c9f50b35a283] ...
	I0930 04:11:29.228565    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9f50b35a283"
	I0930 04:11:29.242646    4929 logs.go:123] Gathering logs for coredns [85a0a5385195] ...
	I0930 04:11:29.242656    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85a0a5385195"
	I0930 04:11:29.264029    4929 logs.go:123] Gathering logs for coredns [4b60eaea6a29] ...
	I0930 04:11:29.264039    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b60eaea6a29"
	I0930 04:11:29.275470    4929 logs.go:123] Gathering logs for Docker ...
	I0930 04:11:29.275483    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0930 04:11:29.301731    4929 logs.go:123] Gathering logs for kubelet ...
	I0930 04:11:29.301748    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 04:11:29.335382    4929 logs.go:123] Gathering logs for describe nodes ...
	I0930 04:11:29.335391    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0930 04:11:29.372114    4929 logs.go:123] Gathering logs for coredns [86d6de14d3fe] ...
	I0930 04:11:29.372125    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86d6de14d3fe"
	I0930 04:11:29.384465    4929 logs.go:123] Gathering logs for kube-scheduler [7dc64314198d] ...
	I0930 04:11:29.384476    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7dc64314198d"
	I0930 04:11:29.399495    4929 logs.go:123] Gathering logs for kube-proxy [485edac7e4e9] ...
	I0930 04:11:29.399511    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 485edac7e4e9"
	I0930 04:11:29.411727    4929 logs.go:123] Gathering logs for coredns [1c58ad832f66] ...
	I0930 04:11:29.411739    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c58ad832f66"
	I0930 04:11:29.424594    4929 logs.go:123] Gathering logs for kube-controller-manager [55b4e3fee39e] ...
	I0930 04:11:29.424604    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 55b4e3fee39e"
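Stepping back, the cycle repeats roughly every eight seconds: a five-second healthz timeout followed by two to three seconds of log gathering. A rough reconstruction of that wait loop (the deadline value and function names are assumptions, not taken from minikube's source):

```go
// Rough reconstruction of the wait loop; deadline and names are assumptions.
package main

import (
	"errors"
	"fmt"
	"time"
)

func probe() error {
	// Stand-in for the checkHealthz sketch above; always failing here,
	// as it does throughout this log.
	return errors.New("context deadline exceeded")
}

func gatherDiagnostics() {
	// Stand-in for the "Gathering logs for ..." blocks: kubelet, dmesg,
	// describe nodes, and docker logs for each control-plane container.
	fmt.Println("gathering diagnostics ...")
}

func main() {
	deadline := time.Now().Add(6 * time.Minute) // assumed overall budget
	for time.Now().Before(deadline) {
		if err := probe(); err == nil {
			fmt.Println("apiserver healthy")
			return
		}
		gatherDiagnostics()
		// In the real run the gathering itself consumes ~2-3s on top of the
		// 5s probe timeout; this sleep stands in for that interval.
		time.Sleep(2500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for apiserver")
}
```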
	I0930 04:11:31.944440    4929 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:11:36.946648    4929 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:11:36.946857    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0930 04:11:36.962884    4929 logs.go:276] 1 containers: [c3591c3891b2]
	I0930 04:11:36.962993    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0930 04:11:36.975427    4929 logs.go:276] 1 containers: [c9f50b35a283]
	I0930 04:11:36.975509    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0930 04:11:36.985966    4929 logs.go:276] 4 containers: [86d6de14d3fe 1c58ad832f66 85a0a5385195 4b60eaea6a29]
	I0930 04:11:36.986050    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0930 04:11:36.996400    4929 logs.go:276] 1 containers: [7dc64314198d]
	I0930 04:11:36.996487    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0930 04:11:37.006766    4929 logs.go:276] 1 containers: [485edac7e4e9]
	I0930 04:11:37.006851    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0930 04:11:37.020061    4929 logs.go:276] 1 containers: [55b4e3fee39e]
	I0930 04:11:37.020151    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0930 04:11:37.030349    4929 logs.go:276] 0 containers: []
	W0930 04:11:37.030359    4929 logs.go:278] No container was found matching "kindnet"
	I0930 04:11:37.030424    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0930 04:11:37.040742    4929 logs.go:276] 1 containers: [918371f0f495]
	I0930 04:11:37.040759    4929 logs.go:123] Gathering logs for coredns [1c58ad832f66] ...
	I0930 04:11:37.040764    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c58ad832f66"
	I0930 04:11:37.052193    4929 logs.go:123] Gathering logs for kube-scheduler [7dc64314198d] ...
	I0930 04:11:37.052203    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7dc64314198d"
	I0930 04:11:37.067194    4929 logs.go:123] Gathering logs for describe nodes ...
	I0930 04:11:37.067204    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0930 04:11:37.102201    4929 logs.go:123] Gathering logs for kube-proxy [485edac7e4e9] ...
	I0930 04:11:37.102215    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 485edac7e4e9"
	I0930 04:11:37.113813    4929 logs.go:123] Gathering logs for kube-controller-manager [55b4e3fee39e] ...
	I0930 04:11:37.113822    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 55b4e3fee39e"
	I0930 04:11:37.131544    4929 logs.go:123] Gathering logs for Docker ...
	I0930 04:11:37.131553    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0930 04:11:37.157123    4929 logs.go:123] Gathering logs for kubelet ...
	I0930 04:11:37.157133    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 04:11:37.191771    4929 logs.go:123] Gathering logs for kube-apiserver [c3591c3891b2] ...
	I0930 04:11:37.191779    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3591c3891b2"
	I0930 04:11:37.208948    4929 logs.go:123] Gathering logs for coredns [86d6de14d3fe] ...
	I0930 04:11:37.208958    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86d6de14d3fe"
	I0930 04:11:37.220324    4929 logs.go:123] Gathering logs for coredns [4b60eaea6a29] ...
	I0930 04:11:37.220334    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b60eaea6a29"
	I0930 04:11:37.232170    4929 logs.go:123] Gathering logs for container status ...
	I0930 04:11:37.232184    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 04:11:37.244098    4929 logs.go:123] Gathering logs for dmesg ...
	I0930 04:11:37.244113    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 04:11:37.248477    4929 logs.go:123] Gathering logs for etcd [c9f50b35a283] ...
	I0930 04:11:37.248485    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9f50b35a283"
	I0930 04:11:37.266286    4929 logs.go:123] Gathering logs for coredns [85a0a5385195] ...
	I0930 04:11:37.266299    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85a0a5385195"
	I0930 04:11:37.278723    4929 logs.go:123] Gathering logs for storage-provisioner [918371f0f495] ...
	I0930 04:11:37.278734    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 918371f0f495"
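The kubelet and Docker logs come from journald rather than from containers: -u selects a systemd unit and may be repeated to merge docker and cri-docker, while -n 400 bounds the output to the last 400 lines. A local-exec sketch:

```go
// Local-exec sketch of the journald collection steps.
package main

import (
	"fmt"
	"os/exec"
)

// unitLogs returns the last 400 journal lines for the given systemd units,
// merging them as the repeated -u flags do in the log above.
func unitLogs(units ...string) (string, error) {
	args := []string{"journalctl", "-n", "400"}
	for _, u := range units {
		args = append(args, "-u", u)
	}
	out, err := exec.Command("sudo", args...).CombinedOutput()
	return string(out), err
}

func main() {
	out, err := unitLogs("docker", "cri-docker") // or unitLogs("kubelet")
	if err != nil {
		fmt.Println("journalctl failed:", err)
		return
	}
	fmt.Print(out)
}
```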
	I0930 04:11:39.792355    4929 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:11:44.794482    4929 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:11:44.794657    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0930 04:11:44.806658    4929 logs.go:276] 1 containers: [c3591c3891b2]
	I0930 04:11:44.806744    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0930 04:11:44.817738    4929 logs.go:276] 1 containers: [c9f50b35a283]
	I0930 04:11:44.817827    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0930 04:11:44.828435    4929 logs.go:276] 4 containers: [86d6de14d3fe 1c58ad832f66 85a0a5385195 4b60eaea6a29]
	I0930 04:11:44.828519    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0930 04:11:44.839435    4929 logs.go:276] 1 containers: [7dc64314198d]
	I0930 04:11:44.839516    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0930 04:11:44.850074    4929 logs.go:276] 1 containers: [485edac7e4e9]
	I0930 04:11:44.850156    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0930 04:11:44.860944    4929 logs.go:276] 1 containers: [55b4e3fee39e]
	I0930 04:11:44.861018    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0930 04:11:44.871087    4929 logs.go:276] 0 containers: []
	W0930 04:11:44.871098    4929 logs.go:278] No container was found matching "kindnet"
	I0930 04:11:44.871172    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0930 04:11:44.882306    4929 logs.go:276] 1 containers: [918371f0f495]
	I0930 04:11:44.882322    4929 logs.go:123] Gathering logs for coredns [86d6de14d3fe] ...
	I0930 04:11:44.882327    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86d6de14d3fe"
	I0930 04:11:44.898807    4929 logs.go:123] Gathering logs for coredns [85a0a5385195] ...
	I0930 04:11:44.898822    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85a0a5385195"
	I0930 04:11:44.910655    4929 logs.go:123] Gathering logs for coredns [4b60eaea6a29] ...
	I0930 04:11:44.910667    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b60eaea6a29"
	I0930 04:11:44.922514    4929 logs.go:123] Gathering logs for dmesg ...
	I0930 04:11:44.922524    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 04:11:44.927149    4929 logs.go:123] Gathering logs for coredns [1c58ad832f66] ...
	I0930 04:11:44.927155    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c58ad832f66"
	I0930 04:11:44.938809    4929 logs.go:123] Gathering logs for describe nodes ...
	I0930 04:11:44.938821    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0930 04:11:44.973834    4929 logs.go:123] Gathering logs for kube-apiserver [c3591c3891b2] ...
	I0930 04:11:44.973850    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3591c3891b2"
	I0930 04:11:44.989443    4929 logs.go:123] Gathering logs for kube-proxy [485edac7e4e9] ...
	I0930 04:11:44.989458    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 485edac7e4e9"
	I0930 04:11:45.005051    4929 logs.go:123] Gathering logs for Docker ...
	I0930 04:11:45.005064    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0930 04:11:45.030207    4929 logs.go:123] Gathering logs for storage-provisioner [918371f0f495] ...
	I0930 04:11:45.030215    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 918371f0f495"
	I0930 04:11:45.041389    4929 logs.go:123] Gathering logs for container status ...
	I0930 04:11:45.041399    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 04:11:45.053820    4929 logs.go:123] Gathering logs for kubelet ...
	I0930 04:11:45.053832    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 04:11:45.087371    4929 logs.go:123] Gathering logs for etcd [c9f50b35a283] ...
	I0930 04:11:45.087379    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9f50b35a283"
	I0930 04:11:45.101045    4929 logs.go:123] Gathering logs for kube-scheduler [7dc64314198d] ...
	I0930 04:11:45.101059    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7dc64314198d"
	I0930 04:11:45.115833    4929 logs.go:123] Gathering logs for kube-controller-manager [55b4e3fee39e] ...
	I0930 04:11:45.115844    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 55b4e3fee39e"
	I0930 04:11:47.634724    4929 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:11:52.635576    4929 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:11:52.635879    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0930 04:11:52.661387    4929 logs.go:276] 1 containers: [c3591c3891b2]
	I0930 04:11:52.661536    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0930 04:11:52.677402    4929 logs.go:276] 1 containers: [c9f50b35a283]
	I0930 04:11:52.677507    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0930 04:11:52.690146    4929 logs.go:276] 4 containers: [86d6de14d3fe 1c58ad832f66 85a0a5385195 4b60eaea6a29]
	I0930 04:11:52.690234    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0930 04:11:52.708497    4929 logs.go:276] 1 containers: [7dc64314198d]
	I0930 04:11:52.708582    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0930 04:11:52.725578    4929 logs.go:276] 1 containers: [485edac7e4e9]
	I0930 04:11:52.725661    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0930 04:11:52.736307    4929 logs.go:276] 1 containers: [55b4e3fee39e]
	I0930 04:11:52.736393    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0930 04:11:52.747031    4929 logs.go:276] 0 containers: []
	W0930 04:11:52.747042    4929 logs.go:278] No container was found matching "kindnet"
	I0930 04:11:52.747113    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0930 04:11:52.757077    4929 logs.go:276] 1 containers: [918371f0f495]
	I0930 04:11:52.757096    4929 logs.go:123] Gathering logs for coredns [4b60eaea6a29] ...
	I0930 04:11:52.757102    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b60eaea6a29"
	I0930 04:11:52.769019    4929 logs.go:123] Gathering logs for kube-controller-manager [55b4e3fee39e] ...
	I0930 04:11:52.769029    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 55b4e3fee39e"
	I0930 04:11:52.787510    4929 logs.go:123] Gathering logs for container status ...
	I0930 04:11:52.787521    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 04:11:52.799370    4929 logs.go:123] Gathering logs for coredns [1c58ad832f66] ...
	I0930 04:11:52.799383    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c58ad832f66"
	I0930 04:11:52.811380    4929 logs.go:123] Gathering logs for kube-scheduler [7dc64314198d] ...
	I0930 04:11:52.811390    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7dc64314198d"
	I0930 04:11:52.826543    4929 logs.go:123] Gathering logs for coredns [86d6de14d3fe] ...
	I0930 04:11:52.826554    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86d6de14d3fe"
	I0930 04:11:52.838065    4929 logs.go:123] Gathering logs for coredns [85a0a5385195] ...
	I0930 04:11:52.838076    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85a0a5385195"
	I0930 04:11:52.850318    4929 logs.go:123] Gathering logs for kube-proxy [485edac7e4e9] ...
	I0930 04:11:52.850331    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 485edac7e4e9"
	I0930 04:11:52.863171    4929 logs.go:123] Gathering logs for storage-provisioner [918371f0f495] ...
	I0930 04:11:52.863183    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 918371f0f495"
	I0930 04:11:52.875084    4929 logs.go:123] Gathering logs for kubelet ...
	I0930 04:11:52.875100    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 04:11:52.909938    4929 logs.go:123] Gathering logs for kube-apiserver [c3591c3891b2] ...
	I0930 04:11:52.909952    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3591c3891b2"
	I0930 04:11:52.927599    4929 logs.go:123] Gathering logs for etcd [c9f50b35a283] ...
	I0930 04:11:52.927613    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9f50b35a283"
	I0930 04:11:52.941660    4929 logs.go:123] Gathering logs for dmesg ...
	I0930 04:11:52.941674    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 04:11:52.946669    4929 logs.go:123] Gathering logs for describe nodes ...
	I0930 04:11:52.946677    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0930 04:11:52.985331    4929 logs.go:123] Gathering logs for Docker ...
	I0930 04:11:52.985345    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
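The "describe nodes" step deliberately runs the guest's own kubectl binary for v1.24.1 against the guest kubeconfig, so its output matches the cluster version regardless of what kubectl the host has installed. For comparison, a hypothetical equivalent node listing via client-go (not what minikube runs here, and assuming the kubeconfig path is readable):

```go
// Hypothetical client-go equivalent of the "describe nodes" probe; the real
// step runs the guest's versioned kubectl binary over SSH instead.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Path taken from the log lines above; adjust for a host-side run.
	config, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	nodes, err := clientset.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		fmt.Println(n.Name, n.Status.NodeInfo.KubeletVersion)
	}
}
```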
	I0930 04:11:55.511312    4929 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:12:00.513541    4929 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:12:00.513876    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0930 04:12:00.541654    4929 logs.go:276] 1 containers: [c3591c3891b2]
	I0930 04:12:00.541809    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0930 04:12:00.561188    4929 logs.go:276] 1 containers: [c9f50b35a283]
	I0930 04:12:00.561290    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0930 04:12:00.574934    4929 logs.go:276] 4 containers: [86d6de14d3fe 1c58ad832f66 85a0a5385195 4b60eaea6a29]
	I0930 04:12:00.575033    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0930 04:12:00.586088    4929 logs.go:276] 1 containers: [7dc64314198d]
	I0930 04:12:00.586167    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0930 04:12:00.599643    4929 logs.go:276] 1 containers: [485edac7e4e9]
	I0930 04:12:00.599726    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0930 04:12:00.609925    4929 logs.go:276] 1 containers: [55b4e3fee39e]
	I0930 04:12:00.610017    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0930 04:12:00.620147    4929 logs.go:276] 0 containers: []
	W0930 04:12:00.620159    4929 logs.go:278] No container was found matching "kindnet"
	I0930 04:12:00.620240    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0930 04:12:00.630648    4929 logs.go:276] 1 containers: [918371f0f495]
	I0930 04:12:00.630665    4929 logs.go:123] Gathering logs for describe nodes ...
	I0930 04:12:00.630671    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0930 04:12:00.670169    4929 logs.go:123] Gathering logs for coredns [85a0a5385195] ...
	I0930 04:12:00.670180    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85a0a5385195"
	I0930 04:12:00.682386    4929 logs.go:123] Gathering logs for kube-controller-manager [55b4e3fee39e] ...
	I0930 04:12:00.682398    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 55b4e3fee39e"
	I0930 04:12:00.700575    4929 logs.go:123] Gathering logs for storage-provisioner [918371f0f495] ...
	I0930 04:12:00.700585    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 918371f0f495"
	I0930 04:12:00.712523    4929 logs.go:123] Gathering logs for container status ...
	I0930 04:12:00.712534    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 04:12:00.725315    4929 logs.go:123] Gathering logs for coredns [1c58ad832f66] ...
	I0930 04:12:00.725324    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c58ad832f66"
	I0930 04:12:00.737007    4929 logs.go:123] Gathering logs for coredns [4b60eaea6a29] ...
	I0930 04:12:00.737022    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b60eaea6a29"
	I0930 04:12:00.749116    4929 logs.go:123] Gathering logs for kube-proxy [485edac7e4e9] ...
	I0930 04:12:00.749128    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 485edac7e4e9"
	I0930 04:12:00.762067    4929 logs.go:123] Gathering logs for Docker ...
	I0930 04:12:00.762082    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0930 04:12:00.787811    4929 logs.go:123] Gathering logs for kube-scheduler [7dc64314198d] ...
	I0930 04:12:00.787827    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7dc64314198d"
	I0930 04:12:00.804269    4929 logs.go:123] Gathering logs for kubelet ...
	I0930 04:12:00.804281    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 04:12:00.841081    4929 logs.go:123] Gathering logs for dmesg ...
	I0930 04:12:00.841096    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 04:12:00.846264    4929 logs.go:123] Gathering logs for kube-apiserver [c3591c3891b2] ...
	I0930 04:12:00.846277    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3591c3891b2"
	I0930 04:12:00.862164    4929 logs.go:123] Gathering logs for etcd [c9f50b35a283] ...
	I0930 04:12:00.862178    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9f50b35a283"
	I0930 04:12:00.881337    4929 logs.go:123] Gathering logs for coredns [86d6de14d3fe] ...
	I0930 04:12:00.881350    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86d6de14d3fe"
	I0930 04:12:03.403636    4929 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:12:08.404632    4929 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:12:08.404818    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0930 04:12:08.419138    4929 logs.go:276] 1 containers: [c3591c3891b2]
	I0930 04:12:08.419222    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0930 04:12:08.429968    4929 logs.go:276] 1 containers: [c9f50b35a283]
	I0930 04:12:08.430053    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0930 04:12:08.443492    4929 logs.go:276] 4 containers: [86d6de14d3fe 1c58ad832f66 85a0a5385195 4b60eaea6a29]
	I0930 04:12:08.443574    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0930 04:12:08.453820    4929 logs.go:276] 1 containers: [7dc64314198d]
	I0930 04:12:08.453905    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0930 04:12:08.464732    4929 logs.go:276] 1 containers: [485edac7e4e9]
	I0930 04:12:08.464826    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0930 04:12:08.475683    4929 logs.go:276] 1 containers: [55b4e3fee39e]
	I0930 04:12:08.475771    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0930 04:12:08.487466    4929 logs.go:276] 0 containers: []
	W0930 04:12:08.487477    4929 logs.go:278] No container was found matching "kindnet"
	I0930 04:12:08.487548    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0930 04:12:08.497742    4929 logs.go:276] 1 containers: [918371f0f495]
	I0930 04:12:08.497762    4929 logs.go:123] Gathering logs for dmesg ...
	I0930 04:12:08.497768    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 04:12:08.503599    4929 logs.go:123] Gathering logs for kube-proxy [485edac7e4e9] ...
	I0930 04:12:08.503610    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 485edac7e4e9"
	I0930 04:12:08.515644    4929 logs.go:123] Gathering logs for container status ...
	I0930 04:12:08.515658    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 04:12:08.527683    4929 logs.go:123] Gathering logs for coredns [4b60eaea6a29] ...
	I0930 04:12:08.527693    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b60eaea6a29"
	I0930 04:12:08.546984    4929 logs.go:123] Gathering logs for kube-scheduler [7dc64314198d] ...
	I0930 04:12:08.546996    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7dc64314198d"
	I0930 04:12:08.562673    4929 logs.go:123] Gathering logs for Docker ...
	I0930 04:12:08.562683    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0930 04:12:08.586080    4929 logs.go:123] Gathering logs for kubelet ...
	I0930 04:12:08.586088    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 04:12:08.619089    4929 logs.go:123] Gathering logs for describe nodes ...
	I0930 04:12:08.619098    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0930 04:12:08.654309    4929 logs.go:123] Gathering logs for etcd [c9f50b35a283] ...
	I0930 04:12:08.654320    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9f50b35a283"
	I0930 04:12:08.668330    4929 logs.go:123] Gathering logs for kube-controller-manager [55b4e3fee39e] ...
	I0930 04:12:08.668345    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 55b4e3fee39e"
	I0930 04:12:08.686318    4929 logs.go:123] Gathering logs for storage-provisioner [918371f0f495] ...
	I0930 04:12:08.686329    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 918371f0f495"
	I0930 04:12:08.701255    4929 logs.go:123] Gathering logs for kube-apiserver [c3591c3891b2] ...
	I0930 04:12:08.701270    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3591c3891b2"
	I0930 04:12:08.715468    4929 logs.go:123] Gathering logs for coredns [86d6de14d3fe] ...
	I0930 04:12:08.715483    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86d6de14d3fe"
	I0930 04:12:08.727413    4929 logs.go:123] Gathering logs for coredns [1c58ad832f66] ...
	I0930 04:12:08.727427    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c58ad832f66"
	I0930 04:12:08.740671    4929 logs.go:123] Gathering logs for coredns [85a0a5385195] ...
	I0930 04:12:08.740681    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85a0a5385195"
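The dmesg step narrows kernel messages to warnings and above: -P suppresses the pager that -H (human-readable timestamps) would otherwise start, -L=never disables color, and --level filters severities before tail caps the volume. A local-exec sketch of the same pipeline:

```go
// Sketch of the dmesg step, mirroring the command in the lines above.
package main

import (
	"fmt"
	"os/exec"
)

// recentKernelWarnings returns up to the last 400 kernel messages at
// severity warn or above, uncolored and unpaged.
func recentKernelWarnings() (string, error) {
	cmd := exec.Command("/bin/bash", "-c",
		"sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400")
	out, err := cmd.CombinedOutput()
	return string(out), err
}

func main() {
	out, err := recentKernelWarnings()
	if err != nil {
		fmt.Println("dmesg failed:", err)
		return
	}
	fmt.Print(out)
}
```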
	I0930 04:12:11.254346    4929 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:12:16.255641    4929 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:12:16.255807    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0930 04:12:16.268121    4929 logs.go:276] 1 containers: [c3591c3891b2]
	I0930 04:12:16.268210    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0930 04:12:16.279551    4929 logs.go:276] 1 containers: [c9f50b35a283]
	I0930 04:12:16.279638    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0930 04:12:16.294482    4929 logs.go:276] 4 containers: [86d6de14d3fe 1c58ad832f66 85a0a5385195 4b60eaea6a29]
	I0930 04:12:16.294569    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0930 04:12:16.305360    4929 logs.go:276] 1 containers: [7dc64314198d]
	I0930 04:12:16.305442    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0930 04:12:16.315873    4929 logs.go:276] 1 containers: [485edac7e4e9]
	I0930 04:12:16.315960    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0930 04:12:16.330146    4929 logs.go:276] 1 containers: [55b4e3fee39e]
	I0930 04:12:16.330226    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0930 04:12:16.341011    4929 logs.go:276] 0 containers: []
	W0930 04:12:16.341024    4929 logs.go:278] No container was found matching "kindnet"
	I0930 04:12:16.341101    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0930 04:12:16.352818    4929 logs.go:276] 1 containers: [918371f0f495]
	I0930 04:12:16.352836    4929 logs.go:123] Gathering logs for etcd [c9f50b35a283] ...
	I0930 04:12:16.352842    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9f50b35a283"
	I0930 04:12:16.368262    4929 logs.go:123] Gathering logs for coredns [86d6de14d3fe] ...
	I0930 04:12:16.368272    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86d6de14d3fe"
	I0930 04:12:16.384053    4929 logs.go:123] Gathering logs for coredns [1c58ad832f66] ...
	I0930 04:12:16.384066    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c58ad832f66"
	I0930 04:12:16.395264    4929 logs.go:123] Gathering logs for coredns [4b60eaea6a29] ...
	I0930 04:12:16.395275    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b60eaea6a29"
	I0930 04:12:16.406846    4929 logs.go:123] Gathering logs for kube-controller-manager [55b4e3fee39e] ...
	I0930 04:12:16.406859    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 55b4e3fee39e"
	I0930 04:12:16.425546    4929 logs.go:123] Gathering logs for kubelet ...
	I0930 04:12:16.425557    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 04:12:16.461580    4929 logs.go:123] Gathering logs for kube-apiserver [c3591c3891b2] ...
	I0930 04:12:16.461594    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3591c3891b2"
	I0930 04:12:16.476412    4929 logs.go:123] Gathering logs for dmesg ...
	I0930 04:12:16.476422    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 04:12:16.480968    4929 logs.go:123] Gathering logs for describe nodes ...
	I0930 04:12:16.480977    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0930 04:12:16.515686    4929 logs.go:123] Gathering logs for storage-provisioner [918371f0f495] ...
	I0930 04:12:16.515696    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 918371f0f495"
	I0930 04:12:16.528009    4929 logs.go:123] Gathering logs for Docker ...
	I0930 04:12:16.528022    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0930 04:12:16.552542    4929 logs.go:123] Gathering logs for container status ...
	I0930 04:12:16.552554    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 04:12:16.564752    4929 logs.go:123] Gathering logs for coredns [85a0a5385195] ...
	I0930 04:12:16.564763    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85a0a5385195"
	I0930 04:12:16.576716    4929 logs.go:123] Gathering logs for kube-scheduler [7dc64314198d] ...
	I0930 04:12:16.576727    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7dc64314198d"
	I0930 04:12:16.592327    4929 logs.go:123] Gathering logs for kube-proxy [485edac7e4e9] ...
	I0930 04:12:16.592336    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 485edac7e4e9"
	I0930 04:12:19.106172    4929 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:12:24.108430    4929 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:12:24.108649    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0930 04:12:24.129034    4929 logs.go:276] 1 containers: [c3591c3891b2]
	I0930 04:12:24.129128    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0930 04:12:24.147035    4929 logs.go:276] 1 containers: [c9f50b35a283]
	I0930 04:12:24.147122    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0930 04:12:24.158204    4929 logs.go:276] 4 containers: [86d6de14d3fe 1c58ad832f66 85a0a5385195 4b60eaea6a29]
	I0930 04:12:24.158288    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0930 04:12:24.168817    4929 logs.go:276] 1 containers: [7dc64314198d]
	I0930 04:12:24.168893    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0930 04:12:24.180407    4929 logs.go:276] 1 containers: [485edac7e4e9]
	I0930 04:12:24.180493    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0930 04:12:24.191073    4929 logs.go:276] 1 containers: [55b4e3fee39e]
	I0930 04:12:24.191152    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0930 04:12:24.201362    4929 logs.go:276] 0 containers: []
	W0930 04:12:24.201374    4929 logs.go:278] No container was found matching "kindnet"
	I0930 04:12:24.201445    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0930 04:12:24.211908    4929 logs.go:276] 1 containers: [918371f0f495]
	I0930 04:12:24.211928    4929 logs.go:123] Gathering logs for storage-provisioner [918371f0f495] ...
	I0930 04:12:24.211934    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 918371f0f495"
	I0930 04:12:24.223576    4929 logs.go:123] Gathering logs for kube-apiserver [c3591c3891b2] ...
	I0930 04:12:24.223588    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3591c3891b2"
	I0930 04:12:24.237201    4929 logs.go:123] Gathering logs for coredns [85a0a5385195] ...
	I0930 04:12:24.237212    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85a0a5385195"
	I0930 04:12:24.249541    4929 logs.go:123] Gathering logs for kube-scheduler [7dc64314198d] ...
	I0930 04:12:24.249552    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7dc64314198d"
	I0930 04:12:24.265335    4929 logs.go:123] Gathering logs for kube-proxy [485edac7e4e9] ...
	I0930 04:12:24.265347    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 485edac7e4e9"
	I0930 04:12:24.279498    4929 logs.go:123] Gathering logs for Docker ...
	I0930 04:12:24.279509    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0930 04:12:24.303210    4929 logs.go:123] Gathering logs for kubelet ...
	I0930 04:12:24.303220    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 04:12:24.336787    4929 logs.go:123] Gathering logs for etcd [c9f50b35a283] ...
	I0930 04:12:24.336796    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9f50b35a283"
	I0930 04:12:24.350969    4929 logs.go:123] Gathering logs for coredns [1c58ad832f66] ...
	I0930 04:12:24.350983    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c58ad832f66"
	I0930 04:12:24.370981    4929 logs.go:123] Gathering logs for container status ...
	I0930 04:12:24.370993    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 04:12:24.382626    4929 logs.go:123] Gathering logs for coredns [86d6de14d3fe] ...
	I0930 04:12:24.382639    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86d6de14d3fe"
	I0930 04:12:24.399216    4929 logs.go:123] Gathering logs for coredns [4b60eaea6a29] ...
	I0930 04:12:24.399229    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b60eaea6a29"
	I0930 04:12:24.410794    4929 logs.go:123] Gathering logs for kube-controller-manager [55b4e3fee39e] ...
	I0930 04:12:24.410807    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 55b4e3fee39e"
	I0930 04:12:24.428453    4929 logs.go:123] Gathering logs for dmesg ...
	I0930 04:12:24.428464    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 04:12:24.433450    4929 logs.go:123] Gathering logs for describe nodes ...
	I0930 04:12:24.433457    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0930 04:12:26.970726    4929 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:12:31.971096    4929 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:12:31.971284    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0930 04:12:31.984027    4929 logs.go:276] 1 containers: [c3591c3891b2]
	I0930 04:12:31.984128    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0930 04:12:31.994411    4929 logs.go:276] 1 containers: [c9f50b35a283]
	I0930 04:12:31.994497    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0930 04:12:32.006122    4929 logs.go:276] 4 containers: [86d6de14d3fe 1c58ad832f66 85a0a5385195 4b60eaea6a29]
	I0930 04:12:32.006213    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0930 04:12:32.018228    4929 logs.go:276] 1 containers: [7dc64314198d]
	I0930 04:12:32.018308    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0930 04:12:32.028744    4929 logs.go:276] 1 containers: [485edac7e4e9]
	I0930 04:12:32.028824    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0930 04:12:32.039306    4929 logs.go:276] 1 containers: [55b4e3fee39e]
	I0930 04:12:32.039390    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0930 04:12:32.049487    4929 logs.go:276] 0 containers: []
	W0930 04:12:32.049503    4929 logs.go:278] No container was found matching "kindnet"
	I0930 04:12:32.049577    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0930 04:12:32.059781    4929 logs.go:276] 1 containers: [918371f0f495]
	I0930 04:12:32.059796    4929 logs.go:123] Gathering logs for coredns [86d6de14d3fe] ...
	I0930 04:12:32.059802    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86d6de14d3fe"
	I0930 04:12:32.071613    4929 logs.go:123] Gathering logs for kube-scheduler [7dc64314198d] ...
	I0930 04:12:32.071622    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7dc64314198d"
	I0930 04:12:32.087982    4929 logs.go:123] Gathering logs for kube-controller-manager [55b4e3fee39e] ...
	I0930 04:12:32.087996    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 55b4e3fee39e"
	I0930 04:12:32.106921    4929 logs.go:123] Gathering logs for dmesg ...
	I0930 04:12:32.106931    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 04:12:32.111395    4929 logs.go:123] Gathering logs for kube-proxy [485edac7e4e9] ...
	I0930 04:12:32.111402    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 485edac7e4e9"
	I0930 04:12:32.122726    4929 logs.go:123] Gathering logs for Docker ...
	I0930 04:12:32.122736    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0930 04:12:32.146069    4929 logs.go:123] Gathering logs for coredns [1c58ad832f66] ...
	I0930 04:12:32.146079    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c58ad832f66"
	I0930 04:12:32.157457    4929 logs.go:123] Gathering logs for etcd [c9f50b35a283] ...
	I0930 04:12:32.157467    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9f50b35a283"
	I0930 04:12:32.171556    4929 logs.go:123] Gathering logs for describe nodes ...
	I0930 04:12:32.171566    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0930 04:12:32.206153    4929 logs.go:123] Gathering logs for kube-apiserver [c3591c3891b2] ...
	I0930 04:12:32.206169    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3591c3891b2"
	I0930 04:12:32.221447    4929 logs.go:123] Gathering logs for coredns [85a0a5385195] ...
	I0930 04:12:32.221459    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85a0a5385195"
	I0930 04:12:32.233684    4929 logs.go:123] Gathering logs for coredns [4b60eaea6a29] ...
	I0930 04:12:32.233693    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b60eaea6a29"
	I0930 04:12:32.245955    4929 logs.go:123] Gathering logs for storage-provisioner [918371f0f495] ...
	I0930 04:12:32.245965    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 918371f0f495"
	I0930 04:12:32.257587    4929 logs.go:123] Gathering logs for container status ...
	I0930 04:12:32.257596    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 04:12:32.269431    4929 logs.go:123] Gathering logs for kubelet ...
	I0930 04:12:32.269442    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 04:12:34.805679    4929 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:12:39.807852    4929 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:12:39.808131    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0930 04:12:39.827402    4929 logs.go:276] 1 containers: [c3591c3891b2]
	I0930 04:12:39.827523    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0930 04:12:39.841336    4929 logs.go:276] 1 containers: [c9f50b35a283]
	I0930 04:12:39.841431    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0930 04:12:39.853419    4929 logs.go:276] 4 containers: [86d6de14d3fe 1c58ad832f66 85a0a5385195 4b60eaea6a29]
	I0930 04:12:39.853521    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0930 04:12:39.864079    4929 logs.go:276] 1 containers: [7dc64314198d]
	I0930 04:12:39.864159    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0930 04:12:39.874892    4929 logs.go:276] 1 containers: [485edac7e4e9]
	I0930 04:12:39.874978    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0930 04:12:39.885671    4929 logs.go:276] 1 containers: [55b4e3fee39e]
	I0930 04:12:39.885754    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0930 04:12:39.896517    4929 logs.go:276] 0 containers: []
	W0930 04:12:39.896528    4929 logs.go:278] No container was found matching "kindnet"
	I0930 04:12:39.896600    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0930 04:12:39.906781    4929 logs.go:276] 1 containers: [918371f0f495]
	I0930 04:12:39.906797    4929 logs.go:123] Gathering logs for coredns [86d6de14d3fe] ...
	I0930 04:12:39.906802    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86d6de14d3fe"
	I0930 04:12:39.918274    4929 logs.go:123] Gathering logs for Docker ...
	I0930 04:12:39.918284    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0930 04:12:39.942165    4929 logs.go:123] Gathering logs for describe nodes ...
	I0930 04:12:39.942174    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0930 04:12:39.978377    4929 logs.go:123] Gathering logs for coredns [1c58ad832f66] ...
	I0930 04:12:39.978387    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c58ad832f66"
	I0930 04:12:39.990606    4929 logs.go:123] Gathering logs for storage-provisioner [918371f0f495] ...
	I0930 04:12:39.990617    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 918371f0f495"
	I0930 04:12:40.002501    4929 logs.go:123] Gathering logs for kube-apiserver [c3591c3891b2] ...
	I0930 04:12:40.002512    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3591c3891b2"
	I0930 04:12:40.016532    4929 logs.go:123] Gathering logs for etcd [c9f50b35a283] ...
	I0930 04:12:40.016544    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9f50b35a283"
	I0930 04:12:40.031037    4929 logs.go:123] Gathering logs for kube-scheduler [7dc64314198d] ...
	I0930 04:12:40.031047    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7dc64314198d"
	I0930 04:12:40.047423    4929 logs.go:123] Gathering logs for kube-proxy [485edac7e4e9] ...
	I0930 04:12:40.047436    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 485edac7e4e9"
	I0930 04:12:40.059039    4929 logs.go:123] Gathering logs for kubelet ...
	I0930 04:12:40.059050    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 04:12:40.093364    4929 logs.go:123] Gathering logs for coredns [85a0a5385195] ...
	I0930 04:12:40.093376    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85a0a5385195"
	I0930 04:12:40.105123    4929 logs.go:123] Gathering logs for coredns [4b60eaea6a29] ...
	I0930 04:12:40.105139    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b60eaea6a29"
	I0930 04:12:40.116813    4929 logs.go:123] Gathering logs for kube-controller-manager [55b4e3fee39e] ...
	I0930 04:12:40.116825    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 55b4e3fee39e"
	I0930 04:12:40.133703    4929 logs.go:123] Gathering logs for container status ...
	I0930 04:12:40.133712    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 04:12:40.147810    4929 logs.go:123] Gathering logs for dmesg ...
	I0930 04:12:40.147820    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 04:12:42.654067    4929 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:12:47.656185    4929 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:12:47.661527    4929 out.go:201] 
	W0930 04:12:47.665338    4929 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0930 04:12:47.665344    4929 out.go:270] * 
	W0930 04:12:47.665818    4929 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0930 04:12:47.671508    4929 out.go:201] 

** /stderr **
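
The stderr above repeats a single pattern until the 6m0s node budget is exhausted: probe the apiserver at https://10.0.2.15:8443/healthz (each probe gives up after about 5 seconds), and after every failed probe enumerate the k8s_* containers with `docker ps` and re-collect their logs before trying again. Below is a minimal Go sketch of that probe loop for orientation; pollHealthz, the timeout values, and the InsecureSkipVerify shortcut are illustrative assumptions, not minikube's actual code.

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// pollHealthz keeps probing /healthz until it returns 200 OK or the
	// overall deadline passes, mirroring the cadence visible in the log.
	func pollHealthz(url string, overall time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second, // per-probe timeout, like the ~5s gaps above
			Transport: &http.Transport{
				// Simplification: the real client trusts the cluster CA instead.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		deadline := time.Now().Add(overall)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // a healthy apiserver ends the wait
				}
			}
			// This is where the log gathering seen above happens before the retry.
			time.Sleep(2 * time.Second)
		}
		return fmt.Errorf("apiserver healthz never reported healthy: context deadline exceeded")
	}

	func main() {
		if err := pollHealthz("https://10.0.2.15:8443/healthz", 6*time.Minute); err != nil {
			fmt.Println("GUEST_START:", err)
		}
	}

In this run no probe ever returned healthy, so the loop ran out its deadline and start exited with the GUEST_START error shown.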
version_upgrade_test.go:132: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p running-upgrade-520000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
panic.go:629: *** TestRunningBinaryUpgrade FAILED at 2024-09-30 04:12:47.770197 -0700 PDT m=+3163.504753876
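For orientation: this test drives two binaries against one profile, which the Audit table in the post-mortem below confirms: a released v1.26.0 binary creates running-upgrade-520000, and the freshly built out/minikube-darwin-arm64 then re-runs start on the same profile while the cluster is still running. A rough Go sketch of that flow (the start helper and the old-binary path are illustrative, not the real test body):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// start shells out to a minikube binary much as the harness does.
	func start(binary, profile string, extra ...string) error {
		args := append([]string{"start", "-p", profile, "--memory=2200"}, extra...)
		return exec.Command(binary, args...).Run()
	}

	func main() {
		profile := "running-upgrade-520000"
		// Step 1: the old release creates the cluster (this step succeeded above).
		if err := start("minikube-v1.26.0", profile, "--vm-driver=qemu2"); err != nil {
			fmt.Println("old binary failed:", err)
			return
		}
		// Step 2: the binary under test restarts it; here it exited with status 80.
		if err := start("out/minikube-darwin-arm64", profile,
			"--alsologtostderr", "-v=1", "--driver=qemu2"); err != nil {
			fmt.Println("upgrade failed:", err)
		}
	}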
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-520000 -n running-upgrade-520000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-520000 -n running-upgrade-520000: exit status 2 (15.619494625s)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
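The harness tolerates the non-zero exit here because `minikube status` encodes machine state in its exit code and still prints the host field ("Running" above) on stdout. A made-up Go helper showing how such a probe's stdout and exit code can be captured together (runStatus is not the real helpers_test.go code):

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	// runStatus runs the status probe and returns stdout plus the exit code.
	func runStatus(profile string) (string, int, error) {
		cmd := exec.Command("out/minikube-darwin-arm64",
			"status", "--format={{.Host}}", "-p", profile, "-n", profile)
		out, err := cmd.Output()
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			// A non-zero exit (2 above) still comes with usable stdout.
			return string(out), exitErr.ExitCode(), nil
		}
		return string(out), 0, err
	}

	func main() {
		out, code, err := runStatus("running-upgrade-520000")
		fmt.Printf("host=%q exit=%d err=%v\n", out, code, err)
	}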
helpers_test.go:244: <<< TestRunningBinaryUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestRunningBinaryUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p running-upgrade-520000 logs -n 25
helpers_test.go:252: TestRunningBinaryUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| start   | -p force-systemd-flag-910000          | force-systemd-flag-910000 | jenkins | v1.34.0 | 30 Sep 24 04:02 PDT |                     |
	|         | --memory=2048 --force-systemd         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | force-systemd-env-516000              | force-systemd-env-516000  | jenkins | v1.34.0 | 30 Sep 24 04:02 PDT |                     |
	|         | ssh docker info --format              |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-516000           | force-systemd-env-516000  | jenkins | v1.34.0 | 30 Sep 24 04:02 PDT | 30 Sep 24 04:02 PDT |
	| start   | -p docker-flags-602000                | docker-flags-602000       | jenkins | v1.34.0 | 30 Sep 24 04:02 PDT |                     |
	|         | --cache-images=false                  |                           |         |         |                     |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --install-addons=false                |                           |         |         |                     |                     |
	|         | --wait=false                          |                           |         |         |                     |                     |
	|         | --docker-env=FOO=BAR                  |                           |         |         |                     |                     |
	|         | --docker-env=BAZ=BAT                  |                           |         |         |                     |                     |
	|         | --docker-opt=debug                    |                           |         |         |                     |                     |
	|         | --docker-opt=icc=true                 |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-910000             | force-systemd-flag-910000 | jenkins | v1.34.0 | 30 Sep 24 04:02 PDT |                     |
	|         | ssh docker info --format              |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-910000          | force-systemd-flag-910000 | jenkins | v1.34.0 | 30 Sep 24 04:02 PDT | 30 Sep 24 04:02 PDT |
	| start   | -p cert-expiration-565000             | cert-expiration-565000    | jenkins | v1.34.0 | 30 Sep 24 04:02 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                  |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | docker-flags-602000 ssh               | docker-flags-602000       | jenkins | v1.34.0 | 30 Sep 24 04:02 PDT |                     |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=Environment                |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| ssh     | docker-flags-602000 ssh               | docker-flags-602000       | jenkins | v1.34.0 | 30 Sep 24 04:02 PDT |                     |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=ExecStart                  |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| delete  | -p docker-flags-602000                | docker-flags-602000       | jenkins | v1.34.0 | 30 Sep 24 04:02 PDT | 30 Sep 24 04:02 PDT |
	| start   | -p cert-options-474000                | cert-options-474000       | jenkins | v1.34.0 | 30 Sep 24 04:02 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | cert-options-474000 ssh               | cert-options-474000       | jenkins | v1.34.0 | 30 Sep 24 04:02 PDT |                     |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-474000 -- sudo        | cert-options-474000       | jenkins | v1.34.0 | 30 Sep 24 04:02 PDT |                     |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-474000                | cert-options-474000       | jenkins | v1.34.0 | 30 Sep 24 04:02 PDT | 30 Sep 24 04:02 PDT |
	| start   | -p running-upgrade-520000             | minikube                  | jenkins | v1.26.0 | 30 Sep 24 04:03 PDT | 30 Sep 24 04:04 PDT |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                     |                           |         |         |                     |                     |
	| start   | -p running-upgrade-520000             | running-upgrade-520000    | jenkins | v1.34.0 | 30 Sep 24 04:04 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| start   | -p cert-expiration-565000             | cert-expiration-565000    | jenkins | v1.34.0 | 30 Sep 24 04:05 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h               |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| delete  | -p cert-expiration-565000             | cert-expiration-565000    | jenkins | v1.34.0 | 30 Sep 24 04:05 PDT | 30 Sep 24 04:05 PDT |
	| start   | -p kubernetes-upgrade-925000          | kubernetes-upgrade-925000 | jenkins | v1.34.0 | 30 Sep 24 04:05 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-925000          | kubernetes-upgrade-925000 | jenkins | v1.34.0 | 30 Sep 24 04:06 PDT | 30 Sep 24 04:06 PDT |
	| start   | -p kubernetes-upgrade-925000          | kubernetes-upgrade-925000 | jenkins | v1.34.0 | 30 Sep 24 04:06 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1          |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-925000          | kubernetes-upgrade-925000 | jenkins | v1.34.0 | 30 Sep 24 04:06 PDT | 30 Sep 24 04:06 PDT |
	| start   | -p stopped-upgrade-312000             | minikube                  | jenkins | v1.26.0 | 30 Sep 24 04:06 PDT | 30 Sep 24 04:07 PDT |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                     |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-312000 stop           | minikube                  | jenkins | v1.26.0 | 30 Sep 24 04:07 PDT | 30 Sep 24 04:07 PDT |
	| start   | -p stopped-upgrade-312000             | stopped-upgrade-312000    | jenkins | v1.34.0 | 30 Sep 24 04:07 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/30 04:07:20
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.23.0 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0930 04:07:20.696322    5073 out.go:345] Setting OutFile to fd 1 ...
	I0930 04:07:20.696490    5073 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 04:07:20.696494    5073 out.go:358] Setting ErrFile to fd 2...
	I0930 04:07:20.696497    5073 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 04:07:20.696668    5073 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19734-1406/.minikube/bin
	I0930 04:07:20.697972    5073 out.go:352] Setting JSON to false
	I0930 04:07:20.717383    5073 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4003,"bootTime":1727690437,"procs":508,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0930 04:07:20.717479    5073 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0930 04:07:20.722442    5073 out.go:177] * [stopped-upgrade-312000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0930 04:07:20.728355    5073 out.go:177]   - MINIKUBE_LOCATION=19734
	I0930 04:07:20.728460    5073 notify.go:220] Checking for updates...
	I0930 04:07:20.736293    5073 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19734-1406/kubeconfig
	I0930 04:07:20.739336    5073 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0930 04:07:20.742342    5073 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0930 04:07:20.745411    5073 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19734-1406/.minikube
	I0930 04:07:20.748337    5073 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0930 04:07:20.751603    5073 config.go:182] Loaded profile config "stopped-upgrade-312000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0930 04:07:20.755289    5073 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I0930 04:07:20.758291    5073 driver.go:394] Setting default libvirt URI to qemu:///system
	I0930 04:07:20.762344    5073 out.go:177] * Using the qemu2 driver based on existing profile
	I0930 04:07:20.769341    5073 start.go:297] selected driver: qemu2
	I0930 04:07:20.769347    5073 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-312000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50491 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-312000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0930 04:07:20.769415    5073 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0930 04:07:20.772211    5073 cni.go:84] Creating CNI manager for ""
	I0930 04:07:20.772244    5073 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0930 04:07:20.772270    5073 start.go:340] cluster config:
	{Name:stopped-upgrade-312000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50491 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-312000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0930 04:07:20.772326    5073 iso.go:125] acquiring lock: {Name:mk0aae8f7f133a44b195b0eefe4c7ae573d14aac Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0930 04:07:20.783828    5073 out.go:177] * Starting "stopped-upgrade-312000" primary control-plane node in "stopped-upgrade-312000" cluster
	I0930 04:07:20.788356    5073 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0930 04:07:20.788372    5073 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0930 04:07:20.788384    5073 cache.go:56] Caching tarball of preloaded images
	I0930 04:07:20.788455    5073 preload.go:172] Found /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0930 04:07:20.788462    5073 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0930 04:07:20.788521    5073 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19734-1406/.minikube/profiles/stopped-upgrade-312000/config.json ...
	I0930 04:07:20.789031    5073 start.go:360] acquireMachinesLock for stopped-upgrade-312000: {Name:mk6ecfc7d8126da9fa2597e515ffeee9de676ee7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0930 04:07:20.789072    5073 start.go:364] duration metric: took 33.708µs to acquireMachinesLock for "stopped-upgrade-312000"
	I0930 04:07:20.789081    5073 start.go:96] Skipping create...Using existing machine configuration
	I0930 04:07:20.789087    5073 fix.go:54] fixHost starting: 
	I0930 04:07:20.789208    5073 fix.go:112] recreateIfNeeded on stopped-upgrade-312000: state=Stopped err=<nil>
	W0930 04:07:20.789221    5073 fix.go:138] unexpected machine state, will restart: <nil>
	I0930 04:07:20.797314    5073 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-312000" ...
	I0930 04:07:19.955341    4929 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:07:19.955475    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0930 04:07:19.970165    4929 logs.go:276] 2 containers: [864f592786f2 8408bdfbfd17]
	I0930 04:07:19.970249    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0930 04:07:19.980986    4929 logs.go:276] 2 containers: [7a69c1942b12 a764d080a1f9]
	I0930 04:07:19.981076    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0930 04:07:19.992738    4929 logs.go:276] 1 containers: [537166960acb]
	I0930 04:07:19.992811    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0930 04:07:20.003220    4929 logs.go:276] 2 containers: [1abd48636662 8df176c83bba]
	I0930 04:07:20.003306    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0930 04:07:20.014110    4929 logs.go:276] 1 containers: [f51c21bd868e]
	I0930 04:07:20.014182    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0930 04:07:20.025260    4929 logs.go:276] 2 containers: [73a6cf14fb81 f856ca1ca41a]
	I0930 04:07:20.025342    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0930 04:07:20.035583    4929 logs.go:276] 0 containers: []
	W0930 04:07:20.035596    4929 logs.go:278] No container was found matching "kindnet"
	I0930 04:07:20.035665    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0930 04:07:20.046852    4929 logs.go:276] 2 containers: [11e436cafb0c 90f072b83535]
	I0930 04:07:20.046871    4929 logs.go:123] Gathering logs for kube-controller-manager [f856ca1ca41a] ...
	I0930 04:07:20.046877    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f856ca1ca41a"
	I0930 04:07:20.058979    4929 logs.go:123] Gathering logs for Docker ...
	I0930 04:07:20.058989    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0930 04:07:20.083826    4929 logs.go:123] Gathering logs for kubelet ...
	I0930 04:07:20.083842    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 04:07:20.124902    4929 logs.go:123] Gathering logs for describe nodes ...
	I0930 04:07:20.124915    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0930 04:07:20.159478    4929 logs.go:123] Gathering logs for etcd [a764d080a1f9] ...
	I0930 04:07:20.159489    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a764d080a1f9"
	I0930 04:07:20.180131    4929 logs.go:123] Gathering logs for kube-scheduler [8df176c83bba] ...
	I0930 04:07:20.180143    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8df176c83bba"
	I0930 04:07:20.204289    4929 logs.go:123] Gathering logs for storage-provisioner [11e436cafb0c] ...
	I0930 04:07:20.204305    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11e436cafb0c"
	I0930 04:07:20.216051    4929 logs.go:123] Gathering logs for storage-provisioner [90f072b83535] ...
	I0930 04:07:20.216061    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90f072b83535"
	I0930 04:07:20.230568    4929 logs.go:123] Gathering logs for container status ...
	I0930 04:07:20.230578    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 04:07:20.247164    4929 logs.go:123] Gathering logs for kube-controller-manager [73a6cf14fb81] ...
	I0930 04:07:20.247172    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73a6cf14fb81"
	I0930 04:07:20.266954    4929 logs.go:123] Gathering logs for kube-apiserver [864f592786f2] ...
	I0930 04:07:20.266965    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 864f592786f2"
	I0930 04:07:20.283421    4929 logs.go:123] Gathering logs for kube-apiserver [8408bdfbfd17] ...
	I0930 04:07:20.283434    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8408bdfbfd17"
	I0930 04:07:20.305010    4929 logs.go:123] Gathering logs for kube-scheduler [1abd48636662] ...
	I0930 04:07:20.305021    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1abd48636662"
	I0930 04:07:20.317356    4929 logs.go:123] Gathering logs for kube-proxy [f51c21bd868e] ...
	I0930 04:07:20.317369    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f51c21bd868e"
	I0930 04:07:20.329113    4929 logs.go:123] Gathering logs for dmesg ...
	I0930 04:07:20.329125    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 04:07:20.333312    4929 logs.go:123] Gathering logs for etcd [7a69c1942b12] ...
	I0930 04:07:20.333321    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a69c1942b12"
	I0930 04:07:20.348838    4929 logs.go:123] Gathering logs for coredns [537166960acb] ...
	I0930 04:07:20.348848    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 537166960acb"
	I0930 04:07:22.862285    4929 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:07:20.801289    5073 qemu.go:418] Using hvf for hardware acceleration
	I0930 04:07:20.801373    5073 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/9.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/stopped-upgrade-312000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19734-1406/.minikube/machines/stopped-upgrade-312000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/stopped-upgrade-312000/qemu.pid -nic user,model=virtio,hostfwd=tcp::50456-:22,hostfwd=tcp::50457-:2376,hostname=stopped-upgrade-312000 -daemonize /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/stopped-upgrade-312000/disk.qcow2
	I0930 04:07:20.847913    5073 main.go:141] libmachine: STDOUT: 
	I0930 04:07:20.847942    5073 main.go:141] libmachine: STDERR: 
	I0930 04:07:20.847950    5073 main.go:141] libmachine: Waiting for VM to start (ssh -p 50456 docker@127.0.0.1)...
	I0930 04:07:27.864944    4929 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:07:27.865150    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0930 04:07:27.880879    4929 logs.go:276] 2 containers: [864f592786f2 8408bdfbfd17]
	I0930 04:07:27.880972    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0930 04:07:27.891911    4929 logs.go:276] 2 containers: [7a69c1942b12 a764d080a1f9]
	I0930 04:07:27.891999    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0930 04:07:27.902680    4929 logs.go:276] 1 containers: [537166960acb]
	I0930 04:07:27.902760    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0930 04:07:27.913426    4929 logs.go:276] 2 containers: [1abd48636662 8df176c83bba]
	I0930 04:07:27.913514    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0930 04:07:27.923766    4929 logs.go:276] 1 containers: [f51c21bd868e]
	I0930 04:07:27.923845    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0930 04:07:27.934441    4929 logs.go:276] 2 containers: [73a6cf14fb81 f856ca1ca41a]
	I0930 04:07:27.934517    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0930 04:07:27.944546    4929 logs.go:276] 0 containers: []
	W0930 04:07:27.944557    4929 logs.go:278] No container was found matching "kindnet"
	I0930 04:07:27.944627    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0930 04:07:27.959239    4929 logs.go:276] 2 containers: [11e436cafb0c 90f072b83535]
	I0930 04:07:27.959260    4929 logs.go:123] Gathering logs for kube-controller-manager [73a6cf14fb81] ...
	I0930 04:07:27.959265    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73a6cf14fb81"
	I0930 04:07:27.976838    4929 logs.go:123] Gathering logs for kube-apiserver [864f592786f2] ...
	I0930 04:07:27.976848    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 864f592786f2"
	I0930 04:07:27.990288    4929 logs.go:123] Gathering logs for etcd [7a69c1942b12] ...
	I0930 04:07:27.990303    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a69c1942b12"
	I0930 04:07:28.003976    4929 logs.go:123] Gathering logs for etcd [a764d080a1f9] ...
	I0930 04:07:28.003990    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a764d080a1f9"
	I0930 04:07:28.022147    4929 logs.go:123] Gathering logs for Docker ...
	I0930 04:07:28.022158    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0930 04:07:28.045570    4929 logs.go:123] Gathering logs for dmesg ...
	I0930 04:07:28.045579    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 04:07:28.049698    4929 logs.go:123] Gathering logs for coredns [537166960acb] ...
	I0930 04:07:28.049707    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 537166960acb"
	I0930 04:07:28.060680    4929 logs.go:123] Gathering logs for kube-controller-manager [f856ca1ca41a] ...
	I0930 04:07:28.060691    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f856ca1ca41a"
	I0930 04:07:28.073965    4929 logs.go:123] Gathering logs for storage-provisioner [11e436cafb0c] ...
	I0930 04:07:28.073975    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11e436cafb0c"
	I0930 04:07:28.085360    4929 logs.go:123] Gathering logs for storage-provisioner [90f072b83535] ...
	I0930 04:07:28.085371    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90f072b83535"
	I0930 04:07:28.096670    4929 logs.go:123] Gathering logs for container status ...
	I0930 04:07:28.096683    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 04:07:28.108851    4929 logs.go:123] Gathering logs for kube-apiserver [8408bdfbfd17] ...
	I0930 04:07:28.108865    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8408bdfbfd17"
	I0930 04:07:28.128032    4929 logs.go:123] Gathering logs for kube-scheduler [8df176c83bba] ...
	I0930 04:07:28.128045    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8df176c83bba"
	I0930 04:07:28.143707    4929 logs.go:123] Gathering logs for kube-proxy [f51c21bd868e] ...
	I0930 04:07:28.143718    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f51c21bd868e"
	I0930 04:07:28.155956    4929 logs.go:123] Gathering logs for kubelet ...
	I0930 04:07:28.155969    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 04:07:28.196443    4929 logs.go:123] Gathering logs for describe nodes ...
	I0930 04:07:28.196455    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0930 04:07:28.232863    4929 logs.go:123] Gathering logs for kube-scheduler [1abd48636662] ...
	I0930 04:07:28.232875    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1abd48636662"
	I0930 04:07:30.747415    4929 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:07:35.749679    4929 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:07:35.749859    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0930 04:07:35.762161    4929 logs.go:276] 2 containers: [864f592786f2 8408bdfbfd17]
	I0930 04:07:35.762257    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0930 04:07:35.772775    4929 logs.go:276] 2 containers: [7a69c1942b12 a764d080a1f9]
	I0930 04:07:35.772861    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0930 04:07:35.783131    4929 logs.go:276] 1 containers: [537166960acb]
	I0930 04:07:35.783227    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0930 04:07:35.800860    4929 logs.go:276] 2 containers: [1abd48636662 8df176c83bba]
	I0930 04:07:35.800943    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0930 04:07:35.811286    4929 logs.go:276] 1 containers: [f51c21bd868e]
	I0930 04:07:35.811354    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0930 04:07:35.822030    4929 logs.go:276] 2 containers: [73a6cf14fb81 f856ca1ca41a]
	I0930 04:07:35.822108    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0930 04:07:35.836689    4929 logs.go:276] 0 containers: []
	W0930 04:07:35.836701    4929 logs.go:278] No container was found matching "kindnet"
	I0930 04:07:35.836767    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0930 04:07:35.847757    4929 logs.go:276] 2 containers: [11e436cafb0c 90f072b83535]
	I0930 04:07:35.847774    4929 logs.go:123] Gathering logs for etcd [a764d080a1f9] ...
	I0930 04:07:35.847779    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a764d080a1f9"
	I0930 04:07:35.865324    4929 logs.go:123] Gathering logs for kube-controller-manager [73a6cf14fb81] ...
	I0930 04:07:35.865335    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73a6cf14fb81"
	I0930 04:07:35.882536    4929 logs.go:123] Gathering logs for kube-controller-manager [f856ca1ca41a] ...
	I0930 04:07:35.882545    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f856ca1ca41a"
	I0930 04:07:35.895670    4929 logs.go:123] Gathering logs for container status ...
	I0930 04:07:35.895684    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 04:07:35.907572    4929 logs.go:123] Gathering logs for kubelet ...
	I0930 04:07:35.907582    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 04:07:35.947654    4929 logs.go:123] Gathering logs for kube-apiserver [864f592786f2] ...
	I0930 04:07:35.947661    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 864f592786f2"
	I0930 04:07:35.967840    4929 logs.go:123] Gathering logs for kube-scheduler [8df176c83bba] ...
	I0930 04:07:35.967852    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8df176c83bba"
	I0930 04:07:35.982135    4929 logs.go:123] Gathering logs for storage-provisioner [90f072b83535] ...
	I0930 04:07:35.982148    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90f072b83535"
	I0930 04:07:35.993068    4929 logs.go:123] Gathering logs for etcd [7a69c1942b12] ...
	I0930 04:07:35.993081    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a69c1942b12"
	I0930 04:07:36.006273    4929 logs.go:123] Gathering logs for kube-scheduler [1abd48636662] ...
	I0930 04:07:36.006286    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1abd48636662"
	I0930 04:07:36.018082    4929 logs.go:123] Gathering logs for dmesg ...
	I0930 04:07:36.018094    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 04:07:36.022738    4929 logs.go:123] Gathering logs for kube-apiserver [8408bdfbfd17] ...
	I0930 04:07:36.022747    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8408bdfbfd17"
	I0930 04:07:36.042161    4929 logs.go:123] Gathering logs for kube-proxy [f51c21bd868e] ...
	I0930 04:07:36.042170    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f51c21bd868e"
	I0930 04:07:36.053896    4929 logs.go:123] Gathering logs for storage-provisioner [11e436cafb0c] ...
	I0930 04:07:36.053908    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11e436cafb0c"
	I0930 04:07:36.066218    4929 logs.go:123] Gathering logs for Docker ...
	I0930 04:07:36.066226    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0930 04:07:36.089341    4929 logs.go:123] Gathering logs for describe nodes ...
	I0930 04:07:36.089348    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0930 04:07:36.125826    4929 logs.go:123] Gathering logs for coredns [537166960acb] ...
	I0930 04:07:36.125841    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 537166960acb"
	I0930 04:07:38.639372    4929 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:07:41.096231    5073 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19734-1406/.minikube/profiles/stopped-upgrade-312000/config.json ...
	I0930 04:07:41.097219    5073 machine.go:93] provisionDockerMachine start ...
	I0930 04:07:41.097402    5073 main.go:141] libmachine: Using SSH client type: native
	I0930 04:07:41.097844    5073 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x105055c00] 0x105058440 <nil>  [] 0s} localhost 50456 <nil> <nil>}
	I0930 04:07:41.097858    5073 main.go:141] libmachine: About to run SSH command:
	hostname
	I0930 04:07:41.197917    5073 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0930 04:07:41.197954    5073 buildroot.go:166] provisioning hostname "stopped-upgrade-312000"
	I0930 04:07:41.198103    5073 main.go:141] libmachine: Using SSH client type: native
	I0930 04:07:41.198355    5073 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x105055c00] 0x105058440 <nil>  [] 0s} localhost 50456 <nil> <nil>}
	I0930 04:07:41.198367    5073 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-312000 && echo "stopped-upgrade-312000" | sudo tee /etc/hostname
	I0930 04:07:41.286376    5073 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-312000
	
	I0930 04:07:41.286463    5073 main.go:141] libmachine: Using SSH client type: native
	I0930 04:07:41.286627    5073 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x105055c00] 0x105058440 <nil>  [] 0s} localhost 50456 <nil> <nil>}
	I0930 04:07:41.286640    5073 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-312000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-312000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-312000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0930 04:07:41.368395    5073 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0930 04:07:41.368411    5073 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19734-1406/.minikube CaCertPath:/Users/jenkins/minikube-integration/19734-1406/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19734-1406/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19734-1406/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19734-1406/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19734-1406/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19734-1406/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19734-1406/.minikube}
	I0930 04:07:41.368420    5073 buildroot.go:174] setting up certificates
	I0930 04:07:41.368425    5073 provision.go:84] configureAuth start
	I0930 04:07:41.368436    5073 provision.go:143] copyHostCerts
	I0930 04:07:41.368529    5073 exec_runner.go:144] found /Users/jenkins/minikube-integration/19734-1406/.minikube/ca.pem, removing ...
	I0930 04:07:41.368540    5073 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19734-1406/.minikube/ca.pem
	I0930 04:07:41.368698    5073 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19734-1406/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19734-1406/.minikube/ca.pem (1078 bytes)
	I0930 04:07:41.368927    5073 exec_runner.go:144] found /Users/jenkins/minikube-integration/19734-1406/.minikube/cert.pem, removing ...
	I0930 04:07:41.368931    5073 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19734-1406/.minikube/cert.pem
	I0930 04:07:41.368995    5073 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19734-1406/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19734-1406/.minikube/cert.pem (1123 bytes)
	I0930 04:07:41.369137    5073 exec_runner.go:144] found /Users/jenkins/minikube-integration/19734-1406/.minikube/key.pem, removing ...
	I0930 04:07:41.369142    5073 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19734-1406/.minikube/key.pem
	I0930 04:07:41.369200    5073 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19734-1406/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19734-1406/.minikube/key.pem (1675 bytes)
	I0930 04:07:41.369304    5073 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19734-1406/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19734-1406/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-312000 san=[127.0.0.1 localhost minikube stopped-upgrade-312000]
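
The provision.go:117 step above generates a TLS server certificate signed by the minikube CA, with the listed SANs. A self-contained Go sketch of the same idea using crypto/x509; it creates a throwaway CA in memory rather than loading the ca.pem/ca-key.pem files named in the log, so the output is illustrative only:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"log"
		"math/big"
		"net"
		"os"
		"time"
	)

	func must(err error) {
		if err != nil {
			log.Fatal(err)
		}
	}

	func main() {
		// Throwaway CA standing in for ca.pem/ca-key.pem from the log.
		caKey, err := rsa.GenerateKey(rand.Reader, 2048)
		must(err)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(24 * time.Hour),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		must(err)
		caCert, err := x509.ParseCertificate(caDER)
		must(err)

		// Server cert with the SANs from the log line above:
		// san=[127.0.0.1 localhost minikube stopped-upgrade-312000]
		srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
		must(err)
		srvTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{Organization: []string{"jenkins.stopped-upgrade-312000"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(24 * time.Hour),
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1")},
			DNSNames:     []string{"localhost", "minikube", "stopped-upgrade-312000"},
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}
		srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
		must(err)
		must(pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER}))
	}
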
	I0930 04:07:41.486998    5073 provision.go:177] copyRemoteCerts
	I0930 04:07:41.487051    5073 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0930 04:07:41.487064    5073 sshutil.go:53] new ssh client: &{IP:localhost Port:50456 SSHKeyPath:/Users/jenkins/minikube-integration/19734-1406/.minikube/machines/stopped-upgrade-312000/id_rsa Username:docker}
	I0930 04:07:41.526214    5073 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19734-1406/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0930 04:07:41.533608    5073 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0930 04:07:41.541189    5073 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0930 04:07:41.548431    5073 provision.go:87] duration metric: took 179.993458ms to configureAuth
	I0930 04:07:41.548440    5073 buildroot.go:189] setting minikube options for container-runtime
	I0930 04:07:41.548548    5073 config.go:182] Loaded profile config "stopped-upgrade-312000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0930 04:07:41.548587    5073 main.go:141] libmachine: Using SSH client type: native
	I0930 04:07:41.548682    5073 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x105055c00] 0x105058440 <nil>  [] 0s} localhost 50456 <nil> <nil>}
	I0930 04:07:41.548687    5073 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0930 04:07:41.620736    5073 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0930 04:07:41.620747    5073 buildroot.go:70] root file system type: tmpfs
	I0930 04:07:41.620803    5073 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0930 04:07:41.620864    5073 main.go:141] libmachine: Using SSH client type: native
	I0930 04:07:41.620973    5073 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x105055c00] 0x105058440 <nil>  [] 0s} localhost 50456 <nil> <nil>}
	I0930 04:07:41.621006    5073 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0930 04:07:41.696187    5073 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0930 04:07:41.696247    5073 main.go:141] libmachine: Using SSH client type: native
	I0930 04:07:41.696354    5073 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x105055c00] 0x105058440 <nil>  [] 0s} localhost 50456 <nil> <nil>}
	I0930 04:07:41.696363    5073 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0930 04:07:42.070574    5073 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0930 04:07:42.070587    5073 machine.go:96] duration metric: took 973.367625ms to provisionDockerMachine
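
The `diff -u … || { mv …; systemctl … }` one-liner above is an idempotent unit install: replace docker.service only when the freshly rendered unit differs from what is on disk, then daemon-reload, enable, and restart. A Go sketch of the same pattern (paths from the log; it must run as root on a systemd host, and the error handling is illustrative, not minikube's):

	package main

	import (
		"bytes"
		"fmt"
		"os"
		"os/exec"
	)

	func updateUnit(rendered []byte) error {
		const unit = "/lib/systemd/system/docker.service"
		if old, err := os.ReadFile(unit); err == nil && bytes.Equal(old, rendered) {
			return nil // unchanged: skip the disruptive docker restart
		}
		if err := os.WriteFile(unit+".new", rendered, 0o644); err != nil {
			return err
		}
		if err := os.Rename(unit+".new", unit); err != nil {
			return err
		}
		for _, args := range [][]string{
			{"systemctl", "daemon-reload"},
			{"systemctl", "-f", "enable", "docker"},
			{"systemctl", "-f", "restart", "docker"},
		} {
			if out, err := exec.Command(args[0], args[1:]...).CombinedOutput(); err != nil {
				return fmt.Errorf("%v: %v: %s", args, err, out)
			}
		}
		return nil
	}

	func main() {
		if err := updateUnit([]byte("[Unit]\nDescription=Docker Application Container Engine\n")); err != nil {
			fmt.Println(err)
		}
	}
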
	I0930 04:07:42.070598    5073 start.go:293] postStartSetup for "stopped-upgrade-312000" (driver="qemu2")
	I0930 04:07:42.070606    5073 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0930 04:07:42.070666    5073 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0930 04:07:42.070677    5073 sshutil.go:53] new ssh client: &{IP:localhost Port:50456 SSHKeyPath:/Users/jenkins/minikube-integration/19734-1406/.minikube/machines/stopped-upgrade-312000/id_rsa Username:docker}
	I0930 04:07:42.109675    5073 ssh_runner.go:195] Run: cat /etc/os-release
	I0930 04:07:42.110987    5073 info.go:137] Remote host: Buildroot 2021.02.12
	I0930 04:07:42.110996    5073 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19734-1406/.minikube/addons for local assets ...
	I0930 04:07:42.111081    5073 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19734-1406/.minikube/files for local assets ...
	I0930 04:07:42.111217    5073 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19734-1406/.minikube/files/etc/ssl/certs/19292.pem -> 19292.pem in /etc/ssl/certs
	I0930 04:07:42.111356    5073 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0930 04:07:42.113966    5073 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19734-1406/.minikube/files/etc/ssl/certs/19292.pem --> /etc/ssl/certs/19292.pem (1708 bytes)
	I0930 04:07:42.121166    5073 start.go:296] duration metric: took 50.56325ms for postStartSetup
	I0930 04:07:42.121182    5073 fix.go:56] duration metric: took 21.332401208s for fixHost
	I0930 04:07:42.121220    5073 main.go:141] libmachine: Using SSH client type: native
	I0930 04:07:42.121323    5073 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x105055c00] 0x105058440 <nil>  [] 0s} localhost 50456 <nil> <nil>}
	I0930 04:07:42.121328    5073 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0930 04:07:42.195167    5073 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727694462.585220379
	
	I0930 04:07:42.195178    5073 fix.go:216] guest clock: 1727694462.585220379
	I0930 04:07:42.195182    5073 fix.go:229] Guest: 2024-09-30 04:07:42.585220379 -0700 PDT Remote: 2024-09-30 04:07:42.121183 -0700 PDT m=+21.455551626 (delta=464.037379ms)
	I0930 04:07:42.195198    5073 fix.go:200] guest clock delta is within tolerance: 464.037379ms
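
The fix.go clock check above parses the guest's `date +%s.%N` output and compares it with the host clock. A small Go sketch reproducing the ~464ms delta from this run (the 2s tolerance is an assumption, not minikube's constant; float64 parsing of the timestamp is precise to well under a millisecond at this magnitude):

	package main

	import (
		"fmt"
		"math"
		"strconv"
		"strings"
		"time"
	)

	func clockDelta(guestOut string, hostNow time.Time) (time.Duration, error) {
		secs, err := strconv.ParseFloat(strings.TrimSpace(guestOut), 64)
		if err != nil {
			return 0, err
		}
		sec, frac := math.Modf(secs)
		guest := time.Unix(int64(sec), int64(frac*1e9))
		d := hostNow.Sub(guest)
		if d < 0 {
			d = -d
		}
		return d, nil
	}

	func main() {
		// Values from the log: guest 1727694462.585220379, host 04:07:42.121183 PDT.
		host := time.Date(2024, 9, 30, 4, 7, 42, 121183000, time.FixedZone("PDT", -7*3600))
		d, err := clockDelta("1727694462.585220379", host)
		fmt.Println(d, "within 2s tolerance:", err == nil && d <= 2*time.Second)
	}
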
	I0930 04:07:42.195202    5073 start.go:83] releasing machines lock for "stopped-upgrade-312000", held for 21.406430709s
	I0930 04:07:42.195276    5073 ssh_runner.go:195] Run: cat /version.json
	I0930 04:07:42.195280    5073 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0930 04:07:42.195288    5073 sshutil.go:53] new ssh client: &{IP:localhost Port:50456 SSHKeyPath:/Users/jenkins/minikube-integration/19734-1406/.minikube/machines/stopped-upgrade-312000/id_rsa Username:docker}
	I0930 04:07:42.195298    5073 sshutil.go:53] new ssh client: &{IP:localhost Port:50456 SSHKeyPath:/Users/jenkins/minikube-integration/19734-1406/.minikube/machines/stopped-upgrade-312000/id_rsa Username:docker}
	W0930 04:07:42.195997    5073 sshutil.go:64] dial failure (will retry): dial tcp [::1]:50456: connect: connection refused
	I0930 04:07:42.196012    5073 retry.go:31] will retry after 130.303222ms: dial tcp [::1]:50456: connect: connection refused
	W0930 04:07:42.233383    5073 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0930 04:07:42.233431    5073 ssh_runner.go:195] Run: systemctl --version
	I0930 04:07:42.235099    5073 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0930 04:07:42.236689    5073 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0930 04:07:42.236723    5073 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0930 04:07:42.239616    5073 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0930 04:07:42.243870    5073 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0930 04:07:42.243877    5073 start.go:495] detecting cgroup driver to use...
	I0930 04:07:42.243962    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0930 04:07:42.250387    5073 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0930 04:07:42.253766    5073 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0930 04:07:42.257222    5073 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0930 04:07:42.257247    5073 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0930 04:07:42.260682    5073 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0930 04:07:42.263569    5073 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0930 04:07:42.266405    5073 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0930 04:07:42.269738    5073 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0930 04:07:42.273175    5073 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0930 04:07:42.276497    5073 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0930 04:07:42.279323    5073 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0930 04:07:42.282290    5073 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0930 04:07:42.285443    5073 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0930 04:07:42.288592    5073 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 04:07:42.372239    5073 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0930 04:07:42.382116    5073 start.go:495] detecting cgroup driver to use...
	I0930 04:07:42.382202    5073 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0930 04:07:42.388689    5073 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0930 04:07:42.393368    5073 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0930 04:07:42.400035    5073 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0930 04:07:42.446736    5073 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0930 04:07:42.452191    5073 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0930 04:07:42.517490    5073 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0930 04:07:42.523826    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0930 04:07:42.529948    5073 ssh_runner.go:195] Run: which cri-dockerd
	I0930 04:07:42.531442    5073 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0930 04:07:42.534409    5073 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0930 04:07:42.539370    5073 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0930 04:07:42.617587    5073 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0930 04:07:42.695821    5073 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0930 04:07:42.695884    5073 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0930 04:07:42.701122    5073 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 04:07:42.763509    5073 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0930 04:07:43.884429    5073 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.120920166s)
	I0930 04:07:43.884501    5073 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0930 04:07:43.889455    5073 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0930 04:07:43.894440    5073 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0930 04:07:43.979387    5073 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0930 04:07:44.061450    5073 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 04:07:44.151363    5073 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0930 04:07:44.157003    5073 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0930 04:07:44.161497    5073 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 04:07:44.241891    5073 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0930 04:07:44.279835    5073 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0930 04:07:44.279932    5073 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0930 04:07:44.281990    5073 start.go:563] Will wait 60s for crictl version
	I0930 04:07:44.282048    5073 ssh_runner.go:195] Run: which crictl
	I0930 04:07:44.283552    5073 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0930 04:07:44.297533    5073 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0930 04:07:44.297617    5073 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0930 04:07:44.313906    5073 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0930 04:07:44.333904    5073 out.go:235] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0930 04:07:44.334053    5073 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0930 04:07:44.335322    5073 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
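
The pipeline above is an idempotent /etc/hosts update: strip any stale line ending in a tab plus the name, then append the desired "IP<TAB>name" entry. A pure-function Go sketch of the same rewrite, operating on a string instead of the real /etc/hosts:

	package main

	import (
		"fmt"
		"strings"
	)

	func pinHost(hosts, ip, name string) string {
		var kept []string
		for _, line := range strings.Split(hosts, "\n") {
			if strings.HasSuffix(line, "\t"+name) {
				continue // drop the stale entry, as grep -v does above
			}
			kept = append(kept, line)
		}
		return strings.Join(append(kept, ip+"\t"+name), "\n")
	}

	func main() {
		fmt.Println(pinHost("127.0.0.1\tlocalhost", "10.0.2.2", "host.minikube.internal"))
	}
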
	I0930 04:07:44.338877    5073 kubeadm.go:883] updating cluster {Name:stopped-upgrade-312000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50491 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-312000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0930 04:07:44.338931    5073 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0930 04:07:44.338982    5073 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0930 04:07:44.349415    5073 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0930 04:07:44.349426    5073 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0930 04:07:44.349487    5073 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0930 04:07:44.353047    5073 ssh_runner.go:195] Run: which lz4
	I0930 04:07:44.354349    5073 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0930 04:07:44.355583    5073 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0930 04:07:44.355594    5073 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0930 04:07:45.260515    5073 docker.go:649] duration metric: took 906.219083ms to copy over tarball
	I0930 04:07:45.260586    5073 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
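
The preload flow above is: probe the guest with stat, copy the ~360MB tarball only when the probe fails, then unpack it over /var. A standalone Go sketch of the two shell probes (the scp step goes through minikube's ssh_runner and has no local equivalent here, so it is shown only as a message):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		if err := exec.Command("stat", "-c", "%s %y", "/preloaded.tar.lz4").Run(); err != nil {
			fmt.Println("tarball missing, would transfer it first:", err)
			return
		}
		out, err := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include",
			"security.capability", "-I", "lz4", "-C", "/var", "-xf",
			"/preloaded.tar.lz4").CombinedOutput()
		fmt.Println(string(out), err)
	}
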
	I0930 04:07:43.641921    4929 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:07:43.642196    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0930 04:07:43.672690    4929 logs.go:276] 2 containers: [864f592786f2 8408bdfbfd17]
	I0930 04:07:43.672799    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0930 04:07:43.688203    4929 logs.go:276] 2 containers: [7a69c1942b12 a764d080a1f9]
	I0930 04:07:43.688296    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0930 04:07:43.700454    4929 logs.go:276] 1 containers: [537166960acb]
	I0930 04:07:43.700537    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0930 04:07:43.720261    4929 logs.go:276] 2 containers: [1abd48636662 8df176c83bba]
	I0930 04:07:43.720345    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0930 04:07:43.730860    4929 logs.go:276] 1 containers: [f51c21bd868e]
	I0930 04:07:43.730948    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0930 04:07:43.744536    4929 logs.go:276] 2 containers: [73a6cf14fb81 f856ca1ca41a]
	I0930 04:07:43.744610    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0930 04:07:43.754133    4929 logs.go:276] 0 containers: []
	W0930 04:07:43.754144    4929 logs.go:278] No container was found matching "kindnet"
	I0930 04:07:43.754214    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0930 04:07:43.764992    4929 logs.go:276] 2 containers: [11e436cafb0c 90f072b83535]
	I0930 04:07:43.765008    4929 logs.go:123] Gathering logs for kubelet ...
	I0930 04:07:43.765013    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 04:07:43.807193    4929 logs.go:123] Gathering logs for kube-controller-manager [73a6cf14fb81] ...
	I0930 04:07:43.807207    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73a6cf14fb81"
	I0930 04:07:43.825831    4929 logs.go:123] Gathering logs for etcd [a764d080a1f9] ...
	I0930 04:07:43.825843    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a764d080a1f9"
	I0930 04:07:43.848834    4929 logs.go:123] Gathering logs for kube-scheduler [8df176c83bba] ...
	I0930 04:07:43.848852    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8df176c83bba"
	I0930 04:07:43.863940    4929 logs.go:123] Gathering logs for storage-provisioner [11e436cafb0c] ...
	I0930 04:07:43.863952    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11e436cafb0c"
	I0930 04:07:43.876576    4929 logs.go:123] Gathering logs for storage-provisioner [90f072b83535] ...
	I0930 04:07:43.876587    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90f072b83535"
	I0930 04:07:43.894737    4929 logs.go:123] Gathering logs for Docker ...
	I0930 04:07:43.894749    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0930 04:07:43.919458    4929 logs.go:123] Gathering logs for dmesg ...
	I0930 04:07:43.919468    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 04:07:43.923913    4929 logs.go:123] Gathering logs for describe nodes ...
	I0930 04:07:43.923921    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0930 04:07:43.958747    4929 logs.go:123] Gathering logs for kube-apiserver [864f592786f2] ...
	I0930 04:07:43.958761    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 864f592786f2"
	I0930 04:07:43.973815    4929 logs.go:123] Gathering logs for kube-apiserver [8408bdfbfd17] ...
	I0930 04:07:43.973826    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8408bdfbfd17"
	I0930 04:07:43.994462    4929 logs.go:123] Gathering logs for etcd [7a69c1942b12] ...
	I0930 04:07:43.994473    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a69c1942b12"
	I0930 04:07:44.009953    4929 logs.go:123] Gathering logs for kube-controller-manager [f856ca1ca41a] ...
	I0930 04:07:44.009968    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f856ca1ca41a"
	I0930 04:07:44.022721    4929 logs.go:123] Gathering logs for coredns [537166960acb] ...
	I0930 04:07:44.022734    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 537166960acb"
	I0930 04:07:44.034575    4929 logs.go:123] Gathering logs for kube-scheduler [1abd48636662] ...
	I0930 04:07:44.034590    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1abd48636662"
	I0930 04:07:44.047047    4929 logs.go:123] Gathering logs for kube-proxy [f51c21bd868e] ...
	I0930 04:07:44.047058    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f51c21bd868e"
	I0930 04:07:44.059285    4929 logs.go:123] Gathering logs for container status ...
	I0930 04:07:44.059297    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 04:07:46.574688    4929 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
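
The healthz probes in this run keep failing with Client.Timeout errors. A minimal Go sketch of such a probe (the 4s timeout and the skipped certificate verification are simplifications; minikube's real client authenticates against the cluster CA):

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 4 * time.Second,
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		resp, err := client.Get("https://10.0.2.15:8443/healthz")
		if err != nil {
			// e.g. "context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
			fmt.Println("stopped:", err)
			return
		}
		defer resp.Body.Close()
		fmt.Println("healthz:", resp.Status)
	}
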
	I0930 04:07:46.431632    5073 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.171048583s)
	I0930 04:07:46.431646    5073 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0930 04:07:46.447546    5073 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0930 04:07:46.450503    5073 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0930 04:07:46.455518    5073 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 04:07:46.532148    5073 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0930 04:07:47.981852    5073 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.449705917s)
	I0930 04:07:47.981971    5073 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0930 04:07:47.992608    5073 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0930 04:07:47.992616    5073 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0930 04:07:47.992622    5073 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0930 04:07:47.996662    5073 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0930 04:07:47.998644    5073 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0930 04:07:48.000758    5073 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0930 04:07:48.000921    5073 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0930 04:07:48.003024    5073 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0930 04:07:48.003117    5073 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0930 04:07:48.004577    5073 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0930 04:07:48.004780    5073 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0930 04:07:48.005283    5073 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0930 04:07:48.005875    5073 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0930 04:07:48.007064    5073 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0930 04:07:48.007799    5073 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0930 04:07:48.008298    5073 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0930 04:07:48.008505    5073 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0930 04:07:48.009429    5073 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0930 04:07:48.010203    5073 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0930 04:07:49.906595    5073 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0930 04:07:49.934341    5073 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0930 04:07:49.934392    5073 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0930 04:07:49.934509    5073 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0930 04:07:49.952937    5073 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
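
The cache_images "needs transfer" test above marks an image for reload from the local cache when `docker image inspect` fails or reports an ID other than the expected one. A Go sketch of that decision (the hash is the kube-scheduler value from the log; trimming a "sha256:" prefix before comparing is an assumption about how the IDs are normalized):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func needsTransfer(image, wantHash string) bool {
		out, err := exec.Command("docker", "image", "inspect", "--format", "{{.Id}}", image).Output()
		if err != nil {
			return true // image not present in the runtime at all
		}
		got := strings.TrimPrefix(strings.TrimSpace(string(out)), "sha256:")
		return got != wantHash
	}

	func main() {
		fmt.Println(needsTransfer("registry.k8s.io/kube-scheduler:v1.24.1",
			"000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f"))
	}
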
	I0930 04:07:50.011098    5073 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0930 04:07:50.029142    5073 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0930 04:07:50.029166    5073 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0930 04:07:50.029256    5073 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0930 04:07:50.044495    5073 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0930 04:07:50.044850    5073 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0930 04:07:50.044981    5073 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0930 04:07:50.059309    5073 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0930 04:07:50.059338    5073 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0930 04:07:50.059396    5073 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0930 04:07:50.059415    5073 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0930 04:07:50.059472    5073 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0930 04:07:50.067935    5073 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0930 04:07:50.067951    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0930 04:07:50.073064    5073 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	W0930 04:07:50.087346    5073 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0930 04:07:50.087495    5073 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0930 04:07:50.106172    5073 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0930 04:07:50.106215    5073 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0930 04:07:50.106232    5073 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0930 04:07:50.106296    5073 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0930 04:07:50.116444    5073 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0930 04:07:50.116585    5073 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0930 04:07:50.118084    5073 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0930 04:07:50.118103    5073 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0930 04:07:50.161857    5073 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0930 04:07:50.161872    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0930 04:07:50.195590    5073 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	W0930 04:07:50.343561    5073 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0930 04:07:50.343844    5073 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0930 04:07:50.369089    5073 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0930 04:07:50.369120    5073 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0930 04:07:50.369217    5073 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0930 04:07:50.387344    5073 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0930 04:07:50.387491    5073 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0930 04:07:50.388950    5073 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0930 04:07:50.388962    5073 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0930 04:07:50.418927    5073 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0930 04:07:50.418939    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0930 04:07:50.548161    5073 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0930 04:07:50.552866    5073 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0930 04:07:50.603455    5073 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0930 04:07:50.660337    5073 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0930 04:07:50.660376    5073 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0930 04:07:50.660383    5073 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0930 04:07:50.660397    5073 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0930 04:07:50.660398    5073 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0930 04:07:50.660417    5073 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0930 04:07:50.660431    5073 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0930 04:07:50.660467    5073 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0930 04:07:50.660468    5073 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0930 04:07:50.660468    5073 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0930 04:07:50.685639    5073 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0930 04:07:50.685655    5073 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0930 04:07:50.685867    5073 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0930 04:07:50.685890    5073 cache_images.go:92] duration metric: took 2.693300375s to LoadCachedImages
	W0930 04:07:50.685921    5073 out.go:270] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1: no such file or directory
	I0930 04:07:50.685927    5073 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0930 04:07:50.685981    5073 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-312000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-312000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0930 04:07:50.686046    5073 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
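
The `docker info --format {{.CgroupDriver}}` probe above is how the runtime's cgroup driver is discovered so the generated KubeletConfiguration can match it (cgroupDriver: cgroupfs in this run). A standalone Go sketch:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("docker", "info", "--format", "{{.CgroupDriver}}").Output()
		if err != nil {
			fmt.Println("docker info failed:", err)
			return
		}
		driver := strings.TrimSpace(string(out)) // "cgroupfs" or "systemd"
		fmt.Println("configuring kubelet with cgroupDriver:", driver)
	}
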
	I0930 04:07:51.576052    4929 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:07:51.576179    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0930 04:07:51.591095    4929 logs.go:276] 2 containers: [864f592786f2 8408bdfbfd17]
	I0930 04:07:51.591185    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0930 04:07:51.610514    4929 logs.go:276] 2 containers: [7a69c1942b12 a764d080a1f9]
	I0930 04:07:51.610599    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0930 04:07:51.622084    4929 logs.go:276] 1 containers: [537166960acb]
	I0930 04:07:51.622164    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0930 04:07:51.632982    4929 logs.go:276] 2 containers: [1abd48636662 8df176c83bba]
	I0930 04:07:51.633064    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0930 04:07:51.643553    4929 logs.go:276] 1 containers: [f51c21bd868e]
	I0930 04:07:51.643630    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0930 04:07:51.654549    4929 logs.go:276] 2 containers: [73a6cf14fb81 f856ca1ca41a]
	I0930 04:07:51.654626    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0930 04:07:51.670329    4929 logs.go:276] 0 containers: []
	W0930 04:07:51.670342    4929 logs.go:278] No container was found matching "kindnet"
	I0930 04:07:51.670430    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0930 04:07:51.683119    4929 logs.go:276] 2 containers: [11e436cafb0c 90f072b83535]
	I0930 04:07:51.683142    4929 logs.go:123] Gathering logs for dmesg ...
	I0930 04:07:51.683149    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 04:07:51.687862    4929 logs.go:123] Gathering logs for describe nodes ...
	I0930 04:07:51.687874    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0930 04:07:51.724420    4929 logs.go:123] Gathering logs for etcd [7a69c1942b12] ...
	I0930 04:07:51.724436    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a69c1942b12"
	I0930 04:07:51.738520    4929 logs.go:123] Gathering logs for kube-scheduler [1abd48636662] ...
	I0930 04:07:51.738533    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1abd48636662"
	I0930 04:07:51.750131    4929 logs.go:123] Gathering logs for kube-apiserver [8408bdfbfd17] ...
	I0930 04:07:51.750147    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8408bdfbfd17"
	I0930 04:07:51.775317    4929 logs.go:123] Gathering logs for etcd [a764d080a1f9] ...
	I0930 04:07:51.775331    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a764d080a1f9"
	I0930 04:07:51.793319    4929 logs.go:123] Gathering logs for kube-proxy [f51c21bd868e] ...
	I0930 04:07:51.793335    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f51c21bd868e"
	I0930 04:07:51.806082    4929 logs.go:123] Gathering logs for storage-provisioner [11e436cafb0c] ...
	I0930 04:07:51.806096    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11e436cafb0c"
	I0930 04:07:51.822775    4929 logs.go:123] Gathering logs for Docker ...
	I0930 04:07:51.822788    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0930 04:07:51.847711    4929 logs.go:123] Gathering logs for coredns [537166960acb] ...
	I0930 04:07:51.847727    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 537166960acb"
	I0930 04:07:51.866405    4929 logs.go:123] Gathering logs for kube-controller-manager [f856ca1ca41a] ...
	I0930 04:07:51.866413    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f856ca1ca41a"
	I0930 04:07:51.880165    4929 logs.go:123] Gathering logs for kubelet ...
	I0930 04:07:51.880176    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 04:07:51.923221    4929 logs.go:123] Gathering logs for kube-apiserver [864f592786f2] ...
	I0930 04:07:51.923244    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 864f592786f2"
	I0930 04:07:51.937867    4929 logs.go:123] Gathering logs for kube-scheduler [8df176c83bba] ...
	I0930 04:07:51.937877    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8df176c83bba"
	I0930 04:07:51.951817    4929 logs.go:123] Gathering logs for kube-controller-manager [73a6cf14fb81] ...
	I0930 04:07:51.951828    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73a6cf14fb81"
	I0930 04:07:51.968940    4929 logs.go:123] Gathering logs for storage-provisioner [90f072b83535] ...
	I0930 04:07:51.968950    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90f072b83535"
	I0930 04:07:51.981136    4929 logs.go:123] Gathering logs for container status ...
	I0930 04:07:51.981148    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 04:07:50.699093    5073 cni.go:84] Creating CNI manager for ""
	I0930 04:07:50.699109    5073 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0930 04:07:50.699122    5073 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0930 04:07:50.699133    5073 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-312000 NodeName:stopped-upgrade-312000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0930 04:07:50.699202    5073 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-312000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0930 04:07:50.699260    5073 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0930 04:07:50.701938    5073 binaries.go:44] Found k8s binaries, skipping transfer
	I0930 04:07:50.701973    5073 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0930 04:07:50.704554    5073 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0930 04:07:50.709367    5073 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0930 04:07:50.714063    5073 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0930 04:07:50.719054    5073 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0930 04:07:50.720197    5073 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0930 04:07:50.723839    5073 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 04:07:50.800548    5073 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0930 04:07:50.806021    5073 certs.go:68] Setting up /Users/jenkins/minikube-integration/19734-1406/.minikube/profiles/stopped-upgrade-312000 for IP: 10.0.2.15
	I0930 04:07:50.806032    5073 certs.go:194] generating shared ca certs ...
	I0930 04:07:50.806041    5073 certs.go:226] acquiring lock for ca certs: {Name:mkeec9701f93539137211ace80b844b19e48dcd1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 04:07:50.806213    5073 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19734-1406/.minikube/ca.key
	I0930 04:07:50.806266    5073 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19734-1406/.minikube/proxy-client-ca.key
	I0930 04:07:50.806272    5073 certs.go:256] generating profile certs ...
	I0930 04:07:50.806354    5073 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19734-1406/.minikube/profiles/stopped-upgrade-312000/client.key
	I0930 04:07:50.806370    5073 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19734-1406/.minikube/profiles/stopped-upgrade-312000/apiserver.key.3f7403ac
	I0930 04:07:50.806381    5073 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19734-1406/.minikube/profiles/stopped-upgrade-312000/apiserver.crt.3f7403ac with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0930 04:07:51.028628    5073 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19734-1406/.minikube/profiles/stopped-upgrade-312000/apiserver.crt.3f7403ac ...
	I0930 04:07:51.028646    5073 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19734-1406/.minikube/profiles/stopped-upgrade-312000/apiserver.crt.3f7403ac: {Name:mk603770b4713bd35f9a58d5d4f9414c2f89c7cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 04:07:51.029000    5073 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19734-1406/.minikube/profiles/stopped-upgrade-312000/apiserver.key.3f7403ac ...
	I0930 04:07:51.029010    5073 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19734-1406/.minikube/profiles/stopped-upgrade-312000/apiserver.key.3f7403ac: {Name:mkf2616396a7a904def419dd7c8e7f7c1e845d78 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 04:07:51.029158    5073 certs.go:381] copying /Users/jenkins/minikube-integration/19734-1406/.minikube/profiles/stopped-upgrade-312000/apiserver.crt.3f7403ac -> /Users/jenkins/minikube-integration/19734-1406/.minikube/profiles/stopped-upgrade-312000/apiserver.crt
	I0930 04:07:51.029325    5073 certs.go:385] copying /Users/jenkins/minikube-integration/19734-1406/.minikube/profiles/stopped-upgrade-312000/apiserver.key.3f7403ac -> /Users/jenkins/minikube-integration/19734-1406/.minikube/profiles/stopped-upgrade-312000/apiserver.key
	I0930 04:07:51.032144    5073 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19734-1406/.minikube/profiles/stopped-upgrade-312000/proxy-client.key
	I0930 04:07:51.032313    5073 certs.go:484] found cert: /Users/jenkins/minikube-integration/19734-1406/.minikube/certs/1929.pem (1338 bytes)
	W0930 04:07:51.032343    5073 certs.go:480] ignoring /Users/jenkins/minikube-integration/19734-1406/.minikube/certs/1929_empty.pem, impossibly tiny 0 bytes
	I0930 04:07:51.032350    5073 certs.go:484] found cert: /Users/jenkins/minikube-integration/19734-1406/.minikube/certs/ca-key.pem (1675 bytes)
	I0930 04:07:51.032371    5073 certs.go:484] found cert: /Users/jenkins/minikube-integration/19734-1406/.minikube/certs/ca.pem (1078 bytes)
	I0930 04:07:51.032392    5073 certs.go:484] found cert: /Users/jenkins/minikube-integration/19734-1406/.minikube/certs/cert.pem (1123 bytes)
	I0930 04:07:51.032409    5073 certs.go:484] found cert: /Users/jenkins/minikube-integration/19734-1406/.minikube/certs/key.pem (1675 bytes)
	I0930 04:07:51.032453    5073 certs.go:484] found cert: /Users/jenkins/minikube-integration/19734-1406/.minikube/files/etc/ssl/certs/19292.pem (1708 bytes)
	I0930 04:07:51.032837    5073 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19734-1406/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0930 04:07:51.039878    5073 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19734-1406/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0930 04:07:51.047074    5073 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19734-1406/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0930 04:07:51.053559    5073 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19734-1406/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0930 04:07:51.060694    5073 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19734-1406/.minikube/profiles/stopped-upgrade-312000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0930 04:07:51.067496    5073 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19734-1406/.minikube/profiles/stopped-upgrade-312000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0930 04:07:51.074021    5073 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19734-1406/.minikube/profiles/stopped-upgrade-312000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0930 04:07:51.081176    5073 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19734-1406/.minikube/profiles/stopped-upgrade-312000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0930 04:07:51.088329    5073 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19734-1406/.minikube/files/etc/ssl/certs/19292.pem --> /usr/share/ca-certificates/19292.pem (1708 bytes)
	I0930 04:07:51.095226    5073 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19734-1406/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0930 04:07:51.101817    5073 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19734-1406/.minikube/certs/1929.pem --> /usr/share/ca-certificates/1929.pem (1338 bytes)
	I0930 04:07:51.109275    5073 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0930 04:07:51.115113    5073 ssh_runner.go:195] Run: openssl version
	I0930 04:07:51.117152    5073 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0930 04:07:51.120722    5073 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0930 04:07:51.122145    5073 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 30 10:21 /usr/share/ca-certificates/minikubeCA.pem
	I0930 04:07:51.122170    5073 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0930 04:07:51.123991    5073 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0930 04:07:51.126753    5073 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1929.pem && ln -fs /usr/share/ca-certificates/1929.pem /etc/ssl/certs/1929.pem"
	I0930 04:07:51.129725    5073 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1929.pem
	I0930 04:07:51.131008    5073 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 30 10:37 /usr/share/ca-certificates/1929.pem
	I0930 04:07:51.131035    5073 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1929.pem
	I0930 04:07:51.132592    5073 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1929.pem /etc/ssl/certs/51391683.0"
	I0930 04:07:51.135664    5073 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/19292.pem && ln -fs /usr/share/ca-certificates/19292.pem /etc/ssl/certs/19292.pem"
	I0930 04:07:51.138460    5073 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/19292.pem
	I0930 04:07:51.139860    5073 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 30 10:37 /usr/share/ca-certificates/19292.pem
	I0930 04:07:51.139882    5073 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/19292.pem
	I0930 04:07:51.141525    5073 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/19292.pem /etc/ssl/certs/3ec20f2e.0"
	I0930 04:07:51.144780    5073 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0930 04:07:51.146280    5073 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0930 04:07:51.148254    5073 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0930 04:07:51.150148    5073 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0930 04:07:51.152033    5073 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0930 04:07:51.153951    5073 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0930 04:07:51.155940    5073 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
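
	Each `openssl x509 -noout -in <cert> -checkend 86400` run above exits non-zero when the certificate expires within the next 86400 seconds (24 hours), which is what would trigger regeneration. The same check expressed in Go with crypto/x509 (a sketch, not minikube's code; the cert path is one of those from the log):

	    package main

	    import (
	        "crypto/x509"
	        "encoding/pem"
	        "fmt"
	        "os"
	        "time"
	    )

	    func main() {
	        // Equivalent of `openssl x509 -noout -checkend 86400`:
	        // fail if the certificate expires within the next 24 hours.
	        raw, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	        if err != nil {
	            panic(err)
	        }
	        block, _ := pem.Decode(raw)
	        if block == nil {
	            panic("no PEM block found")
	        }
	        cert, err := x509.ParseCertificate(block.Bytes)
	        if err != nil {
	            panic(err)
	        }
	        if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
	            fmt.Println("certificate will expire within 24h")
	            os.Exit(1)
	        }
	        fmt.Println("certificate is valid for at least 24h")
	    }
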
	I0930 04:07:51.157788    5073 kubeadm.go:392] StartCluster: {Name:stopped-upgrade-312000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50491 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-312000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0930 04:07:51.157867    5073 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0930 04:07:51.172283    5073 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0930 04:07:51.175399    5073 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0930 04:07:51.175410    5073 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0930 04:07:51.175439    5073 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0930 04:07:51.178775    5073 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0930 04:07:51.179760    5073 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-312000" does not appear in /Users/jenkins/minikube-integration/19734-1406/kubeconfig
	I0930 04:07:51.180154    5073 kubeconfig.go:62] /Users/jenkins/minikube-integration/19734-1406/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-312000" cluster setting kubeconfig missing "stopped-upgrade-312000" context setting]
	I0930 04:07:51.180355    5073 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19734-1406/kubeconfig: {Name:mkab83a5d15ec3b983b07760462d9a2ee8e3b4e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 04:07:51.180812    5073 kapi.go:59] client config for stopped-upgrade-312000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19734-1406/.minikube/profiles/stopped-upgrade-312000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19734-1406/.minikube/profiles/stopped-upgrade-312000/client.key", CAFile:"/Users/jenkins/minikube-integration/19734-1406/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x10662e5d0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
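
	The client config printed above is a client-go rest.Config: the API endpoint plus mutual-TLS material (the profile's client cert/key and the cluster CA). A minimal sketch of building an equivalent client, assuming k8s.io/client-go is available; the paths are the ones from the log:

	    package main

	    import (
	        "context"
	        "fmt"

	        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	        "k8s.io/client-go/kubernetes"
	        "k8s.io/client-go/rest"
	    )

	    func main() {
	        // Mirrors the logged config: host plus client cert/key and cluster CA.
	        cfg := &rest.Config{
	            Host: "https://10.0.2.15:8443",
	            TLSClientConfig: rest.TLSClientConfig{
	                CertFile: "/Users/jenkins/minikube-integration/19734-1406/.minikube/profiles/stopped-upgrade-312000/client.crt",
	                KeyFile:  "/Users/jenkins/minikube-integration/19734-1406/.minikube/profiles/stopped-upgrade-312000/client.key",
	                CAFile:   "/Users/jenkins/minikube-integration/19734-1406/.minikube/ca.crt",
	            },
	        }
	        clientset, err := kubernetes.NewForConfig(cfg)
	        if err != nil {
	            panic(err)
	        }
	        nodes, err := clientset.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	        if err != nil {
	            panic(err) // expected to fail while the apiserver is unreachable, as in this run
	        }
	        fmt.Println("nodes:", len(nodes.Items))
	    }
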
	I0930 04:07:51.181148    5073 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0930 04:07:51.184398    5073 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-312000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
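
	The drift detection above follows `diff -u` semantics: exit status 0 means the on-disk kubeadm.yaml matches the freshly rendered one, status 1 means they differ (the unified diff is what gets logged), and anything else is a real error. A Go sketch of that decision (the helper name is illustrative):

	    package main

	    import (
	        "fmt"
	        "os/exec"
	    )

	    // configDrifted runs `diff -u old new` and maps the exit status:
	    // 0 => identical, 1 => drift (diff text returned), other => real error.
	    func configDrifted(oldPath, newPath string) (bool, string, error) {
	        out, err := exec.Command("diff", "-u", oldPath, newPath).CombinedOutput()
	        if err == nil {
	            return false, "", nil
	        }
	        if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 1 {
	            return true, string(out), nil
	        }
	        return false, "", err
	    }

	    func main() {
	        drifted, diff, err := configDrifted("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
	        if err != nil {
	            panic(err)
	        }
	        if drifted {
	            fmt.Print(diff) // reconfigure the cluster from the new file
	        }
	    }
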
	I0930 04:07:51.184405    5073 kubeadm.go:1160] stopping kube-system containers ...
	I0930 04:07:51.184458    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0930 04:07:51.195228    5073 docker.go:483] Stopping containers: [7204ff5e6c12 a6e35c8796d8 5c05fceb7aa1 6c0f2823a096 9a2747d15d5c 3d6f8a951f44 82cb48f54510 5590b05fa90f]
	I0930 04:07:51.195308    5073 ssh_runner.go:195] Run: docker stop 7204ff5e6c12 a6e35c8796d8 5c05fceb7aa1 6c0f2823a096 9a2747d15d5c 3d6f8a951f44 82cb48f54510 5590b05fa90f
	I0930 04:07:51.205936    5073 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0930 04:07:51.211941    5073 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0930 04:07:51.214578    5073 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0930 04:07:51.214588    5073 kubeadm.go:157] found existing configuration files:
	
	I0930 04:07:51.214617    5073 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50491 /etc/kubernetes/admin.conf
	I0930 04:07:51.217500    5073 kubeadm.go:163] "https://control-plane.minikube.internal:50491" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50491 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0930 04:07:51.217532    5073 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0930 04:07:51.220393    5073 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50491 /etc/kubernetes/kubelet.conf
	I0930 04:07:51.222787    5073 kubeadm.go:163] "https://control-plane.minikube.internal:50491" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50491 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0930 04:07:51.222813    5073 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0930 04:07:51.225553    5073 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50491 /etc/kubernetes/controller-manager.conf
	I0930 04:07:51.228420    5073 kubeadm.go:163] "https://control-plane.minikube.internal:50491" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50491 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0930 04:07:51.228445    5073 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0930 04:07:51.231230    5073 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50491 /etc/kubernetes/scheduler.conf
	I0930 04:07:51.233714    5073 kubeadm.go:163] "https://control-plane.minikube.internal:50491" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50491 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0930 04:07:51.233737    5073 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0930 04:07:51.236771    5073 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0930 04:07:51.239587    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0930 04:07:51.260604    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0930 04:07:51.664368    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0930 04:07:51.808293    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0930 04:07:51.837546    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0930 04:07:51.866228    5073 api_server.go:52] waiting for apiserver process to appear ...
	I0930 04:07:51.866323    5073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 04:07:52.368415    5073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 04:07:52.868385    5073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 04:07:52.873003    5073 api_server.go:72] duration metric: took 1.006789917s to wait for apiserver process to appear ...
	I0930 04:07:52.873011    5073 api_server.go:88] waiting for apiserver healthz status ...
	I0930 04:07:52.873025    5073 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:07:54.494590    4929 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:07:57.875033    5073 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
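
	The healthz probes above poll https://10.0.2.15:8443/healthz with a per-request client timeout, so while the apiserver is down every attempt surfaces as "context deadline exceeded (Client.Timeout exceeded while awaiting headers)". A stripped-down version of such a probe; the 5-second timeout and the relaxed TLS verification are assumptions for the sketch, not minikube's exact settings:

	    package main

	    import (
	        "crypto/tls"
	        "fmt"
	        "net/http"
	        "time"
	    )

	    func main() {
	        // Short per-request timeout; certificate checks relaxed because the
	        // probe may run before the kubeconfig trust chain is in place.
	        client := &http.Client{
	            Timeout: 5 * time.Second,
	            Transport: &http.Transport{
	                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	            },
	        }
	        resp, err := client.Get("https://10.0.2.15:8443/healthz")
	        if err != nil {
	            fmt.Println("stopped:", err) // e.g. context deadline exceeded while the apiserver is down
	            return
	        }
	        defer resp.Body.Close()
	        fmt.Println("healthz:", resp.Status)
	    }
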
	I0930 04:07:57.875076    5073 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:07:59.496405    4929 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:07:59.496517    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0930 04:07:59.509515    4929 logs.go:276] 2 containers: [864f592786f2 8408bdfbfd17]
	I0930 04:07:59.509601    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0930 04:07:59.520167    4929 logs.go:276] 2 containers: [7a69c1942b12 a764d080a1f9]
	I0930 04:07:59.520258    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0930 04:07:59.536736    4929 logs.go:276] 1 containers: [537166960acb]
	I0930 04:07:59.536825    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0930 04:07:59.553775    4929 logs.go:276] 2 containers: [1abd48636662 8df176c83bba]
	I0930 04:07:59.553862    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0930 04:07:59.564357    4929 logs.go:276] 1 containers: [f51c21bd868e]
	I0930 04:07:59.564438    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0930 04:07:59.581392    4929 logs.go:276] 2 containers: [73a6cf14fb81 f856ca1ca41a]
	I0930 04:07:59.581474    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0930 04:07:59.592084    4929 logs.go:276] 0 containers: []
	W0930 04:07:59.592099    4929 logs.go:278] No container was found matching "kindnet"
	I0930 04:07:59.592177    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0930 04:07:59.602868    4929 logs.go:276] 2 containers: [11e436cafb0c 90f072b83535]
	I0930 04:07:59.602886    4929 logs.go:123] Gathering logs for kubelet ...
	I0930 04:07:59.602891    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 04:07:59.641787    4929 logs.go:123] Gathering logs for kube-apiserver [864f592786f2] ...
	I0930 04:07:59.641804    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 864f592786f2"
	I0930 04:07:59.670301    4929 logs.go:123] Gathering logs for etcd [7a69c1942b12] ...
	I0930 04:07:59.670317    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a69c1942b12"
	I0930 04:07:59.692565    4929 logs.go:123] Gathering logs for Docker ...
	I0930 04:07:59.692577    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0930 04:07:59.716092    4929 logs.go:123] Gathering logs for container status ...
	I0930 04:07:59.716102    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
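
	The container-status gather uses a shell fallback: prefer crictl when it is on PATH, otherwise fall back to `docker ps -a`. The same preference expressed in Go with exec.LookPath (a sketch; the sudo wrapper is kept from the logged command):

	    package main

	    import (
	        "fmt"
	        "os/exec"
	    )

	    func main() {
	        // Default to docker; switch to crictl only if it resolves on PATH.
	        cmd := exec.Command("sudo", "docker", "ps", "-a")
	        if path, err := exec.LookPath("crictl"); err == nil {
	            cmd = exec.Command("sudo", path, "ps", "-a")
	        }
	        out, err := cmd.CombinedOutput()
	        if err != nil {
	            fmt.Println("container status failed:", err)
	        }
	        fmt.Print(string(out))
	    }
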
	I0930 04:07:59.727656    4929 logs.go:123] Gathering logs for dmesg ...
	I0930 04:07:59.727666    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 04:07:59.731780    4929 logs.go:123] Gathering logs for etcd [a764d080a1f9] ...
	I0930 04:07:59.731788    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a764d080a1f9"
	I0930 04:07:59.748670    4929 logs.go:123] Gathering logs for kube-scheduler [1abd48636662] ...
	I0930 04:07:59.748680    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1abd48636662"
	I0930 04:07:59.764811    4929 logs.go:123] Gathering logs for storage-provisioner [11e436cafb0c] ...
	I0930 04:07:59.764820    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11e436cafb0c"
	I0930 04:07:59.776502    4929 logs.go:123] Gathering logs for storage-provisioner [90f072b83535] ...
	I0930 04:07:59.776517    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90f072b83535"
	I0930 04:07:59.787756    4929 logs.go:123] Gathering logs for kube-apiserver [8408bdfbfd17] ...
	I0930 04:07:59.787767    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8408bdfbfd17"
	I0930 04:07:59.806663    4929 logs.go:123] Gathering logs for kube-proxy [f51c21bd868e] ...
	I0930 04:07:59.806672    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f51c21bd868e"
	I0930 04:07:59.822440    4929 logs.go:123] Gathering logs for kube-controller-manager [73a6cf14fb81] ...
	I0930 04:07:59.822448    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73a6cf14fb81"
	I0930 04:07:59.839408    4929 logs.go:123] Gathering logs for kube-controller-manager [f856ca1ca41a] ...
	I0930 04:07:59.839422    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f856ca1ca41a"
	I0930 04:07:59.852047    4929 logs.go:123] Gathering logs for describe nodes ...
	I0930 04:07:59.852055    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0930 04:07:59.886520    4929 logs.go:123] Gathering logs for coredns [537166960acb] ...
	I0930 04:07:59.886530    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 537166960acb"
	I0930 04:07:59.897744    4929 logs.go:123] Gathering logs for kube-scheduler [8df176c83bba] ...
	I0930 04:07:59.897755    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8df176c83bba"
	I0930 04:08:02.414066    4929 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:08:02.875411    5073 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:08:02.875465    5073 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:08:07.416382    4929 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:08:07.416682    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0930 04:08:07.436957    4929 logs.go:276] 2 containers: [864f592786f2 8408bdfbfd17]
	I0930 04:08:07.437078    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0930 04:08:07.451595    4929 logs.go:276] 2 containers: [7a69c1942b12 a764d080a1f9]
	I0930 04:08:07.451688    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0930 04:08:07.463651    4929 logs.go:276] 1 containers: [537166960acb]
	I0930 04:08:07.463737    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0930 04:08:07.474582    4929 logs.go:276] 2 containers: [1abd48636662 8df176c83bba]
	I0930 04:08:07.474662    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0930 04:08:07.484857    4929 logs.go:276] 1 containers: [f51c21bd868e]
	I0930 04:08:07.484941    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0930 04:08:07.495090    4929 logs.go:276] 2 containers: [73a6cf14fb81 f856ca1ca41a]
	I0930 04:08:07.495168    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0930 04:08:07.505201    4929 logs.go:276] 0 containers: []
	W0930 04:08:07.505211    4929 logs.go:278] No container was found matching "kindnet"
	I0930 04:08:07.505284    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0930 04:08:07.515843    4929 logs.go:276] 2 containers: [11e436cafb0c 90f072b83535]
	I0930 04:08:07.515860    4929 logs.go:123] Gathering logs for describe nodes ...
	I0930 04:08:07.515865    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0930 04:08:07.556559    4929 logs.go:123] Gathering logs for kube-apiserver [864f592786f2] ...
	I0930 04:08:07.556570    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 864f592786f2"
	I0930 04:08:07.574850    4929 logs.go:123] Gathering logs for etcd [a764d080a1f9] ...
	I0930 04:08:07.574861    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a764d080a1f9"
	I0930 04:08:07.592161    4929 logs.go:123] Gathering logs for coredns [537166960acb] ...
	I0930 04:08:07.592172    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 537166960acb"
	I0930 04:08:07.606123    4929 logs.go:123] Gathering logs for kube-scheduler [1abd48636662] ...
	I0930 04:08:07.606135    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1abd48636662"
	I0930 04:08:07.618352    4929 logs.go:123] Gathering logs for kube-scheduler [8df176c83bba] ...
	I0930 04:08:07.618361    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8df176c83bba"
	I0930 04:08:07.632518    4929 logs.go:123] Gathering logs for storage-provisioner [11e436cafb0c] ...
	I0930 04:08:07.632529    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11e436cafb0c"
	I0930 04:08:07.643502    4929 logs.go:123] Gathering logs for storage-provisioner [90f072b83535] ...
	I0930 04:08:07.643513    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90f072b83535"
	I0930 04:08:07.657821    4929 logs.go:123] Gathering logs for Docker ...
	I0930 04:08:07.657834    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0930 04:08:07.683254    4929 logs.go:123] Gathering logs for container status ...
	I0930 04:08:07.683267    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 04:08:07.695181    4929 logs.go:123] Gathering logs for kube-controller-manager [73a6cf14fb81] ...
	I0930 04:08:07.695195    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73a6cf14fb81"
	I0930 04:08:07.712660    4929 logs.go:123] Gathering logs for kubelet ...
	I0930 04:08:07.712675    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 04:08:07.753339    4929 logs.go:123] Gathering logs for dmesg ...
	I0930 04:08:07.753346    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 04:08:07.757660    4929 logs.go:123] Gathering logs for etcd [7a69c1942b12] ...
	I0930 04:08:07.757666    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a69c1942b12"
	I0930 04:08:07.775745    4929 logs.go:123] Gathering logs for kube-controller-manager [f856ca1ca41a] ...
	I0930 04:08:07.775755    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f856ca1ca41a"
	I0930 04:08:07.787751    4929 logs.go:123] Gathering logs for kube-apiserver [8408bdfbfd17] ...
	I0930 04:08:07.787760    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8408bdfbfd17"
	I0930 04:08:07.807195    4929 logs.go:123] Gathering logs for kube-proxy [f51c21bd868e] ...
	I0930 04:08:07.807204    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f51c21bd868e"
	I0930 04:08:07.875944    5073 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:08:07.875968    5073 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:08:10.320858    4929 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:08:12.876355    5073 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:08:12.876378    5073 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:08:15.323030    4929 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:08:15.323228    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0930 04:08:15.339824    4929 logs.go:276] 2 containers: [864f592786f2 8408bdfbfd17]
	I0930 04:08:15.339915    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0930 04:08:15.352795    4929 logs.go:276] 2 containers: [7a69c1942b12 a764d080a1f9]
	I0930 04:08:15.352884    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0930 04:08:15.364039    4929 logs.go:276] 1 containers: [537166960acb]
	I0930 04:08:15.364131    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0930 04:08:15.374497    4929 logs.go:276] 2 containers: [1abd48636662 8df176c83bba]
	I0930 04:08:15.374578    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0930 04:08:15.385093    4929 logs.go:276] 1 containers: [f51c21bd868e]
	I0930 04:08:15.385174    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0930 04:08:15.395553    4929 logs.go:276] 2 containers: [73a6cf14fb81 f856ca1ca41a]
	I0930 04:08:15.395629    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0930 04:08:15.406096    4929 logs.go:276] 0 containers: []
	W0930 04:08:15.406110    4929 logs.go:278] No container was found matching "kindnet"
	I0930 04:08:15.406182    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0930 04:08:15.416767    4929 logs.go:276] 2 containers: [11e436cafb0c 90f072b83535]
	I0930 04:08:15.416786    4929 logs.go:123] Gathering logs for kubelet ...
	I0930 04:08:15.416791    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 04:08:15.455872    4929 logs.go:123] Gathering logs for describe nodes ...
	I0930 04:08:15.455878    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0930 04:08:15.491602    4929 logs.go:123] Gathering logs for kube-apiserver [864f592786f2] ...
	I0930 04:08:15.491617    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 864f592786f2"
	I0930 04:08:15.514301    4929 logs.go:123] Gathering logs for etcd [7a69c1942b12] ...
	I0930 04:08:15.514316    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a69c1942b12"
	I0930 04:08:15.528938    4929 logs.go:123] Gathering logs for storage-provisioner [90f072b83535] ...
	I0930 04:08:15.528952    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90f072b83535"
	I0930 04:08:15.540640    4929 logs.go:123] Gathering logs for coredns [537166960acb] ...
	I0930 04:08:15.540650    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 537166960acb"
	I0930 04:08:15.551522    4929 logs.go:123] Gathering logs for kube-scheduler [1abd48636662] ...
	I0930 04:08:15.551532    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1abd48636662"
	I0930 04:08:15.565900    4929 logs.go:123] Gathering logs for kube-scheduler [8df176c83bba] ...
	I0930 04:08:15.565911    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8df176c83bba"
	I0930 04:08:15.580398    4929 logs.go:123] Gathering logs for storage-provisioner [11e436cafb0c] ...
	I0930 04:08:15.580407    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11e436cafb0c"
	I0930 04:08:15.596493    4929 logs.go:123] Gathering logs for etcd [a764d080a1f9] ...
	I0930 04:08:15.596504    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a764d080a1f9"
	I0930 04:08:15.614664    4929 logs.go:123] Gathering logs for kube-controller-manager [f856ca1ca41a] ...
	I0930 04:08:15.614674    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f856ca1ca41a"
	I0930 04:08:15.632834    4929 logs.go:123] Gathering logs for Docker ...
	I0930 04:08:15.632849    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0930 04:08:15.656392    4929 logs.go:123] Gathering logs for container status ...
	I0930 04:08:15.656402    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 04:08:15.668982    4929 logs.go:123] Gathering logs for dmesg ...
	I0930 04:08:15.668996    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 04:08:15.673558    4929 logs.go:123] Gathering logs for kube-apiserver [8408bdfbfd17] ...
	I0930 04:08:15.673564    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8408bdfbfd17"
	I0930 04:08:15.701204    4929 logs.go:123] Gathering logs for kube-proxy [f51c21bd868e] ...
	I0930 04:08:15.701214    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f51c21bd868e"
	I0930 04:08:15.713036    4929 logs.go:123] Gathering logs for kube-controller-manager [73a6cf14fb81] ...
	I0930 04:08:15.713050    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73a6cf14fb81"
	I0930 04:08:17.876979    5073 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:08:17.877047    5073 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:08:18.231056    4929 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:08:22.878023    5073 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:08:22.878065    5073 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:08:23.233177    4929 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:08:23.233369    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0930 04:08:23.246792    4929 logs.go:276] 2 containers: [864f592786f2 8408bdfbfd17]
	I0930 04:08:23.246887    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0930 04:08:23.258147    4929 logs.go:276] 2 containers: [7a69c1942b12 a764d080a1f9]
	I0930 04:08:23.258234    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0930 04:08:23.268428    4929 logs.go:276] 1 containers: [537166960acb]
	I0930 04:08:23.268511    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0930 04:08:23.279145    4929 logs.go:276] 2 containers: [1abd48636662 8df176c83bba]
	I0930 04:08:23.279231    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0930 04:08:23.289796    4929 logs.go:276] 1 containers: [f51c21bd868e]
	I0930 04:08:23.289884    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0930 04:08:23.303421    4929 logs.go:276] 2 containers: [73a6cf14fb81 f856ca1ca41a]
	I0930 04:08:23.303507    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0930 04:08:23.314048    4929 logs.go:276] 0 containers: []
	W0930 04:08:23.314060    4929 logs.go:278] No container was found matching "kindnet"
	I0930 04:08:23.314131    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0930 04:08:23.324703    4929 logs.go:276] 2 containers: [11e436cafb0c 90f072b83535]
	I0930 04:08:23.324720    4929 logs.go:123] Gathering logs for kubelet ...
	I0930 04:08:23.324725    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 04:08:23.365874    4929 logs.go:123] Gathering logs for etcd [a764d080a1f9] ...
	I0930 04:08:23.365886    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a764d080a1f9"
	I0930 04:08:23.382991    4929 logs.go:123] Gathering logs for kube-proxy [f51c21bd868e] ...
	I0930 04:08:23.383002    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f51c21bd868e"
	I0930 04:08:23.394601    4929 logs.go:123] Gathering logs for kube-controller-manager [73a6cf14fb81] ...
	I0930 04:08:23.394611    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73a6cf14fb81"
	I0930 04:08:23.412126    4929 logs.go:123] Gathering logs for storage-provisioner [11e436cafb0c] ...
	I0930 04:08:23.412136    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11e436cafb0c"
	I0930 04:08:23.423858    4929 logs.go:123] Gathering logs for describe nodes ...
	I0930 04:08:23.423868    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0930 04:08:23.459405    4929 logs.go:123] Gathering logs for container status ...
	I0930 04:08:23.459418    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 04:08:23.471089    4929 logs.go:123] Gathering logs for kube-apiserver [864f592786f2] ...
	I0930 04:08:23.471099    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 864f592786f2"
	I0930 04:08:23.485603    4929 logs.go:123] Gathering logs for coredns [537166960acb] ...
	I0930 04:08:23.485612    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 537166960acb"
	I0930 04:08:23.497064    4929 logs.go:123] Gathering logs for kube-controller-manager [f856ca1ca41a] ...
	I0930 04:08:23.497075    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f856ca1ca41a"
	I0930 04:08:23.509755    4929 logs.go:123] Gathering logs for dmesg ...
	I0930 04:08:23.509766    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 04:08:23.514429    4929 logs.go:123] Gathering logs for kube-apiserver [8408bdfbfd17] ...
	I0930 04:08:23.514439    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8408bdfbfd17"
	I0930 04:08:23.533403    4929 logs.go:123] Gathering logs for etcd [7a69c1942b12] ...
	I0930 04:08:23.533412    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a69c1942b12"
	I0930 04:08:23.547204    4929 logs.go:123] Gathering logs for kube-scheduler [1abd48636662] ...
	I0930 04:08:23.547215    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1abd48636662"
	I0930 04:08:23.561508    4929 logs.go:123] Gathering logs for kube-scheduler [8df176c83bba] ...
	I0930 04:08:23.561518    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8df176c83bba"
	I0930 04:08:23.575319    4929 logs.go:123] Gathering logs for storage-provisioner [90f072b83535] ...
	I0930 04:08:23.575333    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90f072b83535"
	I0930 04:08:23.586853    4929 logs.go:123] Gathering logs for Docker ...
	I0930 04:08:23.586865    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0930 04:08:26.111321    4929 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:08:27.879224    5073 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:08:27.879263    5073 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:08:31.113673    4929 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:08:31.113961    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0930 04:08:31.134908    4929 logs.go:276] 2 containers: [864f592786f2 8408bdfbfd17]
	I0930 04:08:31.135025    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0930 04:08:31.150406    4929 logs.go:276] 2 containers: [7a69c1942b12 a764d080a1f9]
	I0930 04:08:31.150499    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0930 04:08:31.165661    4929 logs.go:276] 1 containers: [537166960acb]
	I0930 04:08:31.165745    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0930 04:08:31.180946    4929 logs.go:276] 2 containers: [1abd48636662 8df176c83bba]
	I0930 04:08:31.181041    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0930 04:08:31.191016    4929 logs.go:276] 1 containers: [f51c21bd868e]
	I0930 04:08:31.191105    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0930 04:08:31.201635    4929 logs.go:276] 2 containers: [73a6cf14fb81 f856ca1ca41a]
	I0930 04:08:31.201716    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0930 04:08:31.212201    4929 logs.go:276] 0 containers: []
	W0930 04:08:31.212211    4929 logs.go:278] No container was found matching "kindnet"
	I0930 04:08:31.212277    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0930 04:08:31.226859    4929 logs.go:276] 2 containers: [11e436cafb0c 90f072b83535]
	I0930 04:08:31.226878    4929 logs.go:123] Gathering logs for kubelet ...
	I0930 04:08:31.226883    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 04:08:31.268209    4929 logs.go:123] Gathering logs for describe nodes ...
	I0930 04:08:31.268221    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0930 04:08:31.303843    4929 logs.go:123] Gathering logs for kube-scheduler [1abd48636662] ...
	I0930 04:08:31.303856    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1abd48636662"
	I0930 04:08:31.315732    4929 logs.go:123] Gathering logs for kube-proxy [f51c21bd868e] ...
	I0930 04:08:31.315744    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f51c21bd868e"
	I0930 04:08:31.327404    4929 logs.go:123] Gathering logs for storage-provisioner [11e436cafb0c] ...
	I0930 04:08:31.327414    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11e436cafb0c"
	I0930 04:08:31.338891    4929 logs.go:123] Gathering logs for kube-apiserver [8408bdfbfd17] ...
	I0930 04:08:31.338906    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8408bdfbfd17"
	I0930 04:08:31.358250    4929 logs.go:123] Gathering logs for etcd [a764d080a1f9] ...
	I0930 04:08:31.358261    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a764d080a1f9"
	I0930 04:08:31.376365    4929 logs.go:123] Gathering logs for storage-provisioner [90f072b83535] ...
	I0930 04:08:31.376376    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90f072b83535"
	I0930 04:08:31.393948    4929 logs.go:123] Gathering logs for container status ...
	I0930 04:08:31.393959    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 04:08:31.406130    4929 logs.go:123] Gathering logs for dmesg ...
	I0930 04:08:31.406142    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 04:08:31.410918    4929 logs.go:123] Gathering logs for kube-controller-manager [73a6cf14fb81] ...
	I0930 04:08:31.410927    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73a6cf14fb81"
	I0930 04:08:31.428435    4929 logs.go:123] Gathering logs for Docker ...
	I0930 04:08:31.428445    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0930 04:08:31.451146    4929 logs.go:123] Gathering logs for kube-apiserver [864f592786f2] ...
	I0930 04:08:31.451154    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 864f592786f2"
	I0930 04:08:31.465506    4929 logs.go:123] Gathering logs for etcd [7a69c1942b12] ...
	I0930 04:08:31.465515    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a69c1942b12"
	I0930 04:08:31.479065    4929 logs.go:123] Gathering logs for coredns [537166960acb] ...
	I0930 04:08:31.479075    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 537166960acb"
	I0930 04:08:31.490377    4929 logs.go:123] Gathering logs for kube-scheduler [8df176c83bba] ...
	I0930 04:08:31.490390    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8df176c83bba"
	I0930 04:08:31.504498    4929 logs.go:123] Gathering logs for kube-controller-manager [f856ca1ca41a] ...
	I0930 04:08:31.504508    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f856ca1ca41a"
	I0930 04:08:32.880648    5073 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:08:32.880674    5073 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:08:34.018563    4929 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:08:39.020900    4929 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:08:39.020989    4929 kubeadm.go:597] duration metric: took 4m4.37681725s to restartPrimaryControlPlane
	W0930 04:08:39.021048    4929 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0930 04:08:39.021078    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0930 04:08:39.970777    4929 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0930 04:08:39.976002    4929 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0930 04:08:39.978875    4929 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0930 04:08:39.981696    4929 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0930 04:08:39.981702    4929 kubeadm.go:157] found existing configuration files:
	
	I0930 04:08:39.981726    4929 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50285 /etc/kubernetes/admin.conf
	I0930 04:08:39.984201    4929 kubeadm.go:163] "https://control-plane.minikube.internal:50285" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50285 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0930 04:08:39.984232    4929 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0930 04:08:39.986856    4929 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50285 /etc/kubernetes/kubelet.conf
	I0930 04:08:39.989681    4929 kubeadm.go:163] "https://control-plane.minikube.internal:50285" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50285 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0930 04:08:39.989708    4929 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0930 04:08:39.992279    4929 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50285 /etc/kubernetes/controller-manager.conf
	I0930 04:08:39.994832    4929 kubeadm.go:163] "https://control-plane.minikube.internal:50285" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50285 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0930 04:08:39.994859    4929 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0930 04:08:39.998020    4929 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50285 /etc/kubernetes/scheduler.conf
	I0930 04:08:40.000779    4929 kubeadm.go:163] "https://control-plane.minikube.internal:50285" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50285 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0930 04:08:40.000807    4929 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0930 04:08:40.003187    4929 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0930 04:08:40.019667    4929 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0930 04:08:40.019762    4929 kubeadm.go:310] [preflight] Running pre-flight checks
	I0930 04:08:40.066485    4929 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0930 04:08:40.066546    4929 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0930 04:08:40.066662    4929 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0930 04:08:40.122038    4929 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0930 04:08:37.882404    5073 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:08:37.882461    5073 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:08:40.127915    4929 out.go:235]   - Generating certificates and keys ...
	I0930 04:08:40.127957    4929 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0930 04:08:40.127987    4929 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0930 04:08:40.128030    4929 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0930 04:08:40.128060    4929 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0930 04:08:40.128095    4929 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0930 04:08:40.128124    4929 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0930 04:08:40.128179    4929 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0930 04:08:40.128219    4929 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0930 04:08:40.128268    4929 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0930 04:08:40.128316    4929 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0930 04:08:40.128336    4929 kubeadm.go:310] [certs] Using the existing "sa" key
	I0930 04:08:40.128366    4929 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0930 04:08:40.357397    4929 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0930 04:08:40.470907    4929 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0930 04:08:40.509979    4929 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0930 04:08:40.679992    4929 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0930 04:08:40.710945    4929 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0930 04:08:40.711357    4929 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0930 04:08:40.711466    4929 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0930 04:08:40.796420    4929 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0930 04:08:40.800599    4929 out.go:235]   - Booting up control plane ...
	I0930 04:08:40.800652    4929 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0930 04:08:40.800696    4929 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0930 04:08:40.800726    4929 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0930 04:08:40.800763    4929 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0930 04:08:40.800866    4929 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0930 04:08:42.884680    5073 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:08:42.884706    5073 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:08:45.304306    4929 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.503021 seconds
	I0930 04:08:45.304480    4929 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0930 04:08:45.311147    4929 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0930 04:08:45.820090    4929 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0930 04:08:45.820180    4929 kubeadm.go:310] [mark-control-plane] Marking the node running-upgrade-520000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0930 04:08:46.324912    4929 kubeadm.go:310] [bootstrap-token] Using token: 7c3uuf.lkibzyvgf4w6zyq5
	I0930 04:08:46.329077    4929 out.go:235]   - Configuring RBAC rules ...
	I0930 04:08:46.329138    4929 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0930 04:08:46.329184    4929 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0930 04:08:46.336052    4929 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0930 04:08:46.337030    4929 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0930 04:08:46.337854    4929 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0930 04:08:46.338654    4929 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0930 04:08:46.342865    4929 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0930 04:08:46.488240    4929 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0930 04:08:46.729119    4929 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0930 04:08:46.729578    4929 kubeadm.go:310] 
	I0930 04:08:46.729614    4929 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0930 04:08:46.729621    4929 kubeadm.go:310] 
	I0930 04:08:46.729680    4929 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0930 04:08:46.729688    4929 kubeadm.go:310] 
	I0930 04:08:46.729704    4929 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0930 04:08:46.729736    4929 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0930 04:08:46.729768    4929 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0930 04:08:46.729776    4929 kubeadm.go:310] 
	I0930 04:08:46.729804    4929 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0930 04:08:46.729809    4929 kubeadm.go:310] 
	I0930 04:08:46.729829    4929 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0930 04:08:46.729832    4929 kubeadm.go:310] 
	I0930 04:08:46.729858    4929 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0930 04:08:46.729892    4929 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0930 04:08:46.729925    4929 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0930 04:08:46.729928    4929 kubeadm.go:310] 
	I0930 04:08:46.729979    4929 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0930 04:08:46.730016    4929 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0930 04:08:46.730020    4929 kubeadm.go:310] 
	I0930 04:08:46.730064    4929 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 7c3uuf.lkibzyvgf4w6zyq5 \
	I0930 04:08:46.730113    4929 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:72c345a63d908b27c1ed290ebc60ebd5e5e1c4e3ebfaa90fcb5390bc8578ae1d \
	I0930 04:08:46.730130    4929 kubeadm.go:310] 	--control-plane 
	I0930 04:08:46.730135    4929 kubeadm.go:310] 
	I0930 04:08:46.730213    4929 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0930 04:08:46.730217    4929 kubeadm.go:310] 
	I0930 04:08:46.730304    4929 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 7c3uuf.lkibzyvgf4w6zyq5 \
	I0930 04:08:46.730359    4929 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:72c345a63d908b27c1ed290ebc60ebd5e5e1c4e3ebfaa90fcb5390bc8578ae1d 
	I0930 04:08:46.730412    4929 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0930 04:08:46.730418    4929 cni.go:84] Creating CNI manager for ""
	I0930 04:08:46.730428    4929 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0930 04:08:46.734199    4929 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0930 04:08:46.741407    4929 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0930 04:08:46.744275    4929 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
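The two lines above configure the bridge CNI: create /etc/cni/net.d, then push a 496-byte 1-k8s.conflist onto the node. As a hedged sketch only, a bridge conflist of that general shape can be written out as below; the field values are generic CNI bridge-plugin defaults, not minikube's exact 496 bytes, and the output path is a local stand-in for the node path in the log.

```go
package main

import (
	"fmt"
	"os"
)

// bridgeConflist is an illustrative bridge CNI configuration of the kind
// minikube installs here. Values are generic defaults, not minikube's file.
const bridgeConflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": { "portMappings": true }
    }
  ]
}
`

func main() {
	// Local stand-in path; the log writes /etc/cni/net.d/1-k8s.conflist on the node.
	if err := os.WriteFile("1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
		fmt.Println(err)
	}
}
```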
	I0930 04:08:46.748905    4929 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0930 04:08:46.748953    4929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 04:08:46.748979    4929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-520000 minikube.k8s.io/updated_at=2024_09_30T04_08_46_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=b68b4b088317c82ffa16da1c47933e77f0f5d128 minikube.k8s.io/name=running-upgrade-520000 minikube.k8s.io/primary=true
	I0930 04:08:46.788572    4929 kubeadm.go:1113] duration metric: took 39.660041ms to wait for elevateKubeSystemPrivileges
	I0930 04:08:46.788613    4929 ops.go:34] apiserver oom_adj: -16
	I0930 04:08:46.788618    4929 kubeadm.go:394] duration metric: took 4m12.158657917s to StartCluster
	I0930 04:08:46.788629    4929 settings.go:142] acquiring lock: {Name:mk8d331f80592adde11c8565cba0670e3b2db485 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 04:08:46.788714    4929 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19734-1406/kubeconfig
	I0930 04:08:46.789066    4929 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19734-1406/kubeconfig: {Name:mkab83a5d15ec3b983b07760462d9a2ee8e3b4e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 04:08:46.789259    4929 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0930 04:08:46.789304    4929 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0930 04:08:46.789368    4929 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-520000"
	I0930 04:08:46.789377    4929 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-520000"
	W0930 04:08:46.789380    4929 addons.go:243] addon storage-provisioner should already be in state true
	I0930 04:08:46.789382    4929 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-520000"
	I0930 04:08:46.789390    4929 host.go:66] Checking if "running-upgrade-520000" exists ...
	I0930 04:08:46.789394    4929 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-520000"
	I0930 04:08:46.789590    4929 config.go:182] Loaded profile config "running-upgrade-520000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0930 04:08:46.789695    4929 retry.go:31] will retry after 1.47803988s: connect: dial unix /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/running-upgrade-520000/monitor: connect: connection refused
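The retry.go:31 line above records minikube backing off after the profile's QEMU monitor socket refused a connection. A minimal illustration of that back-off-and-retry pattern follows; the socket path, delays, and attempt cap are invented for the sketch and are not minikube's actual values or code.

```go
package main

import (
	"fmt"
	"net"
	"time"
)

// dialWithRetry keeps trying to connect to a unix socket, waiting a little
// longer between attempts, until it succeeds or runs out of attempts.
// Illustrative sketch only, not minikube's retry implementation.
func dialWithRetry(socketPath string, attempts int) (net.Conn, error) {
	delay := 500 * time.Millisecond
	var lastErr error
	for i := 0; i < attempts; i++ {
		conn, err := net.Dial("unix", socketPath)
		if err == nil {
			return conn, nil
		}
		lastErr = err
		fmt.Printf("will retry after %v: %v\n", delay, err)
		time.Sleep(delay)
		delay *= 2 // back off between attempts
	}
	return nil, fmt.Errorf("giving up after %d attempts: %w", attempts, lastErr)
}

func main() {
	// Hypothetical socket path standing in for the profile's monitor socket.
	if _, err := dialWithRetry("/tmp/monitor.sock", 5); err != nil {
		fmt.Println(err)
	}
}
```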
	I0930 04:08:46.792383    4929 out.go:177] * Verifying Kubernetes components...
	I0930 04:08:46.800308    4929 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0930 04:08:46.804221    4929 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 04:08:46.808355    4929 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0930 04:08:46.808363    4929 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0930 04:08:46.808370    4929 sshutil.go:53] new ssh client: &{IP:localhost Port:50253 SSHKeyPath:/Users/jenkins/minikube-integration/19734-1406/.minikube/machines/running-upgrade-520000/id_rsa Username:docker}
	I0930 04:08:46.904030    4929 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0930 04:08:46.909084    4929 api_server.go:52] waiting for apiserver process to appear ...
	I0930 04:08:46.909134    4929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 04:08:46.913246    4929 api_server.go:72] duration metric: took 123.974792ms to wait for apiserver process to appear ...
	I0930 04:08:46.913257    4929 api_server.go:88] waiting for apiserver healthz status ...
	I0930 04:08:46.913266    4929 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:08:46.962163    4929 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0930 04:08:47.884920    5073 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:08:47.884944    5073 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
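From here on, both processes (pids 4929 and 5073) repeat the same probe: GET https://10.0.2.15:8443/healthz, hit the client timeout, log "stopped:", and try again a few seconds later. A self-contained sketch of that probe loop is below; it assumes an insecure test client for brevity, whereas the real client is built from the profile's TLS certificates (see the kapi.go client config line that follows).

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// checkHealthz probes an apiserver /healthz endpoint once, with a short
// client timeout like the one visible in the log. Sketch only: TLS
// verification is skipped here, unlike the certificate-based real client.
func checkHealthz(url string) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		return err // e.g. context deadline exceeded while awaiting headers
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	if string(body) != "ok" {
		return fmt.Errorf("healthz returned %q", body)
	}
	return nil
}

func main() {
	for {
		if err := checkHealthz("https://10.0.2.15:8443/healthz"); err != nil {
			fmt.Println("stopped:", err)
			time.Sleep(5 * time.Second)
			continue
		}
		fmt.Println("apiserver healthy")
		return
	}
}
```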
	I0930 04:08:48.270746    4929 kapi.go:59] client config for running-upgrade-520000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19734-1406/.minikube/profiles/running-upgrade-520000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19734-1406/.minikube/profiles/running-upgrade-520000/client.key", CAFile:"/Users/jenkins/minikube-integration/19734-1406/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1025c25d0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0930 04:08:48.270890    4929 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-520000"
	W0930 04:08:48.270896    4929 addons.go:243] addon default-storageclass should already be in state true
	I0930 04:08:48.270909    4929 host.go:66] Checking if "running-upgrade-520000" exists ...
	I0930 04:08:48.271563    4929 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0930 04:08:48.271571    4929 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0930 04:08:48.271577    4929 sshutil.go:53] new ssh client: &{IP:localhost Port:50253 SSHKeyPath:/Users/jenkins/minikube-integration/19734-1406/.minikube/machines/running-upgrade-520000/id_rsa Username:docker}
	I0930 04:08:48.308094    4929 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0930 04:08:48.359532    4929 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0930 04:08:48.359546    4929 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0930 04:08:51.915255    4929 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:08:51.915284    4929 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:08:52.887084    5073 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:08:52.887273    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0930 04:08:52.898461    5073 logs.go:276] 2 containers: [ed13ab559759 5c05fceb7aa1]
	I0930 04:08:52.898561    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0930 04:08:52.908802    5073 logs.go:276] 2 containers: [39ed46927cbf 9a2747d15d5c]
	I0930 04:08:52.908879    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0930 04:08:52.918684    5073 logs.go:276] 1 containers: [857ff5a64aef]
	I0930 04:08:52.918776    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0930 04:08:52.928742    5073 logs.go:276] 2 containers: [f87365fcd967 82cb48f54510]
	I0930 04:08:52.928822    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0930 04:08:52.939123    5073 logs.go:276] 1 containers: [341e764f9485]
	I0930 04:08:52.939215    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0930 04:08:52.949240    5073 logs.go:276] 2 containers: [e0d1000bec45 7204ff5e6c12]
	I0930 04:08:52.949323    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0930 04:08:52.959286    5073 logs.go:276] 0 containers: []
	W0930 04:08:52.959298    5073 logs.go:278] No container was found matching "kindnet"
	I0930 04:08:52.959373    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0930 04:08:52.969798    5073 logs.go:276] 1 containers: [e90f49b06c7e]
	I0930 04:08:52.969814    5073 logs.go:123] Gathering logs for kubelet ...
	I0930 04:08:52.969820    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 04:08:53.009704    5073 logs.go:123] Gathering logs for describe nodes ...
	I0930 04:08:53.009713    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0930 04:08:53.088188    5073 logs.go:123] Gathering logs for coredns [857ff5a64aef] ...
	I0930 04:08:53.088202    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 857ff5a64aef"
	I0930 04:08:53.100292    5073 logs.go:123] Gathering logs for kube-scheduler [f87365fcd967] ...
	I0930 04:08:53.100308    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f87365fcd967"
	I0930 04:08:53.112011    5073 logs.go:123] Gathering logs for kube-scheduler [82cb48f54510] ...
	I0930 04:08:53.112021    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82cb48f54510"
	I0930 04:08:53.129133    5073 logs.go:123] Gathering logs for storage-provisioner [e90f49b06c7e] ...
	I0930 04:08:53.129144    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e90f49b06c7e"
	I0930 04:08:53.144933    5073 logs.go:123] Gathering logs for kube-controller-manager [7204ff5e6c12] ...
	I0930 04:08:53.144945    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7204ff5e6c12"
	I0930 04:08:53.160977    5073 logs.go:123] Gathering logs for Docker ...
	I0930 04:08:53.160988    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0930 04:08:53.185893    5073 logs.go:123] Gathering logs for kube-apiserver [ed13ab559759] ...
	I0930 04:08:53.185902    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed13ab559759"
	I0930 04:08:53.200789    5073 logs.go:123] Gathering logs for etcd [9a2747d15d5c] ...
	I0930 04:08:53.200800    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a2747d15d5c"
	I0930 04:08:53.220505    5073 logs.go:123] Gathering logs for kube-proxy [341e764f9485] ...
	I0930 04:08:53.220516    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 341e764f9485"
	I0930 04:08:53.236129    5073 logs.go:123] Gathering logs for kube-controller-manager [e0d1000bec45] ...
	I0930 04:08:53.236139    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0d1000bec45"
	I0930 04:08:53.253037    5073 logs.go:123] Gathering logs for dmesg ...
	I0930 04:08:53.253047    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 04:08:53.257756    5073 logs.go:123] Gathering logs for kube-apiserver [5c05fceb7aa1] ...
	I0930 04:08:53.257765    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c05fceb7aa1"
	I0930 04:08:53.283990    5073 logs.go:123] Gathering logs for etcd [39ed46927cbf] ...
	I0930 04:08:53.284009    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39ed46927cbf"
	I0930 04:08:53.299252    5073 logs.go:123] Gathering logs for container status ...
	I0930 04:08:53.299262    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
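Each diagnostic pass in this log follows one recipe: resolve container IDs per component with docker ps -a --filter=name=k8s_<component> --format={{.ID}}, then tail the last 400 lines of each container's logs. A compact sketch of that loop, shelling out to docker the way the ssh_runner commands above do; the component names are copied from the log, everything else is illustrative.

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs lists the IDs of (possibly exited) containers whose name
// matches the given k8s component, mirroring the `docker ps -a --filter`
// invocations in the log.
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	// Components probed by the diagnostic passes in the log.
	for _, c := range []string{"kube-apiserver", "etcd", "coredns",
		"kube-scheduler", "kube-proxy", "kube-controller-manager",
		"kindnet", "storage-provisioner"} {
		ids, err := containerIDs(c)
		if err != nil || len(ids) == 0 {
			fmt.Printf("no container found matching %q\n", c)
			continue
		}
		for _, id := range ids {
			// Tail the last 400 lines, as `docker logs --tail 400 <id>` does.
			logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
			fmt.Printf("=== %s [%s] ===\n%s", c, id, logs)
		}
	}
}
```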
	I0930 04:08:56.915448    4929 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:08:56.915474    4929 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:08:55.811471    5073 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:09:01.915750    4929 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:09:01.915808    4929 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:09:00.813759    5073 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:09:00.813928    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0930 04:09:00.824548    5073 logs.go:276] 2 containers: [ed13ab559759 5c05fceb7aa1]
	I0930 04:09:00.824640    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0930 04:09:00.834620    5073 logs.go:276] 2 containers: [39ed46927cbf 9a2747d15d5c]
	I0930 04:09:00.834708    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0930 04:09:00.845587    5073 logs.go:276] 1 containers: [857ff5a64aef]
	I0930 04:09:00.845676    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0930 04:09:00.858051    5073 logs.go:276] 2 containers: [f87365fcd967 82cb48f54510]
	I0930 04:09:00.858137    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0930 04:09:00.868896    5073 logs.go:276] 1 containers: [341e764f9485]
	I0930 04:09:00.868979    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0930 04:09:00.880896    5073 logs.go:276] 2 containers: [e0d1000bec45 7204ff5e6c12]
	I0930 04:09:00.880979    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0930 04:09:00.891730    5073 logs.go:276] 0 containers: []
	W0930 04:09:00.891756    5073 logs.go:278] No container was found matching "kindnet"
	I0930 04:09:00.891833    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0930 04:09:00.902450    5073 logs.go:276] 1 containers: [e90f49b06c7e]
	I0930 04:09:00.902470    5073 logs.go:123] Gathering logs for coredns [857ff5a64aef] ...
	I0930 04:09:00.902476    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 857ff5a64aef"
	I0930 04:09:00.915851    5073 logs.go:123] Gathering logs for kube-controller-manager [e0d1000bec45] ...
	I0930 04:09:00.915863    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0d1000bec45"
	I0930 04:09:00.933667    5073 logs.go:123] Gathering logs for kube-controller-manager [7204ff5e6c12] ...
	I0930 04:09:00.933678    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7204ff5e6c12"
	I0930 04:09:00.945643    5073 logs.go:123] Gathering logs for dmesg ...
	I0930 04:09:00.945653    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 04:09:00.950025    5073 logs.go:123] Gathering logs for kube-apiserver [5c05fceb7aa1] ...
	I0930 04:09:00.950033    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c05fceb7aa1"
	I0930 04:09:00.975037    5073 logs.go:123] Gathering logs for etcd [9a2747d15d5c] ...
	I0930 04:09:00.975048    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a2747d15d5c"
	I0930 04:09:00.989596    5073 logs.go:123] Gathering logs for etcd [39ed46927cbf] ...
	I0930 04:09:00.989606    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39ed46927cbf"
	I0930 04:09:01.003224    5073 logs.go:123] Gathering logs for kube-scheduler [82cb48f54510] ...
	I0930 04:09:01.003235    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82cb48f54510"
	I0930 04:09:01.019726    5073 logs.go:123] Gathering logs for Docker ...
	I0930 04:09:01.019737    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0930 04:09:01.043904    5073 logs.go:123] Gathering logs for kube-proxy [341e764f9485] ...
	I0930 04:09:01.043913    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 341e764f9485"
	I0930 04:09:01.055341    5073 logs.go:123] Gathering logs for container status ...
	I0930 04:09:01.055352    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 04:09:01.067233    5073 logs.go:123] Gathering logs for kubelet ...
	I0930 04:09:01.067244    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 04:09:01.104713    5073 logs.go:123] Gathering logs for describe nodes ...
	I0930 04:09:01.104725    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0930 04:09:01.141025    5073 logs.go:123] Gathering logs for kube-scheduler [f87365fcd967] ...
	I0930 04:09:01.141039    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f87365fcd967"
	I0930 04:09:01.153295    5073 logs.go:123] Gathering logs for kube-apiserver [ed13ab559759] ...
	I0930 04:09:01.153306    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed13ab559759"
	I0930 04:09:01.168334    5073 logs.go:123] Gathering logs for storage-provisioner [e90f49b06c7e] ...
	I0930 04:09:01.168346    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e90f49b06c7e"
	I0930 04:09:03.683736    5073 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:09:06.916289    4929 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:09:06.916335    4929 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:09:08.686169    5073 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:09:08.686332    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0930 04:09:08.699336    5073 logs.go:276] 2 containers: [ed13ab559759 5c05fceb7aa1]
	I0930 04:09:08.699429    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0930 04:09:08.710163    5073 logs.go:276] 2 containers: [39ed46927cbf 9a2747d15d5c]
	I0930 04:09:08.710257    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0930 04:09:08.720486    5073 logs.go:276] 1 containers: [857ff5a64aef]
	I0930 04:09:08.720579    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0930 04:09:08.731515    5073 logs.go:276] 2 containers: [f87365fcd967 82cb48f54510]
	I0930 04:09:08.731601    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0930 04:09:08.741919    5073 logs.go:276] 1 containers: [341e764f9485]
	I0930 04:09:08.742005    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0930 04:09:08.752527    5073 logs.go:276] 2 containers: [e0d1000bec45 7204ff5e6c12]
	I0930 04:09:08.752605    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0930 04:09:08.762944    5073 logs.go:276] 0 containers: []
	W0930 04:09:08.762958    5073 logs.go:278] No container was found matching "kindnet"
	I0930 04:09:08.763033    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0930 04:09:08.777549    5073 logs.go:276] 1 containers: [e90f49b06c7e]
	I0930 04:09:08.777566    5073 logs.go:123] Gathering logs for describe nodes ...
	I0930 04:09:08.777572    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0930 04:09:08.812119    5073 logs.go:123] Gathering logs for kube-apiserver [ed13ab559759] ...
	I0930 04:09:08.812130    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed13ab559759"
	I0930 04:09:08.825935    5073 logs.go:123] Gathering logs for kube-controller-manager [e0d1000bec45] ...
	I0930 04:09:08.825945    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0d1000bec45"
	I0930 04:09:08.842831    5073 logs.go:123] Gathering logs for storage-provisioner [e90f49b06c7e] ...
	I0930 04:09:08.842841    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e90f49b06c7e"
	I0930 04:09:08.854271    5073 logs.go:123] Gathering logs for etcd [39ed46927cbf] ...
	I0930 04:09:08.854287    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39ed46927cbf"
	I0930 04:09:08.867886    5073 logs.go:123] Gathering logs for etcd [9a2747d15d5c] ...
	I0930 04:09:08.867916    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a2747d15d5c"
	I0930 04:09:08.883865    5073 logs.go:123] Gathering logs for coredns [857ff5a64aef] ...
	I0930 04:09:08.883877    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 857ff5a64aef"
	I0930 04:09:08.895429    5073 logs.go:123] Gathering logs for kube-scheduler [f87365fcd967] ...
	I0930 04:09:08.895439    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f87365fcd967"
	I0930 04:09:08.907449    5073 logs.go:123] Gathering logs for kube-scheduler [82cb48f54510] ...
	I0930 04:09:08.907459    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82cb48f54510"
	I0930 04:09:08.924145    5073 logs.go:123] Gathering logs for kubelet ...
	I0930 04:09:08.924156    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 04:09:08.962163    5073 logs.go:123] Gathering logs for kube-proxy [341e764f9485] ...
	I0930 04:09:08.962172    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 341e764f9485"
	I0930 04:09:08.973667    5073 logs.go:123] Gathering logs for kube-controller-manager [7204ff5e6c12] ...
	I0930 04:09:08.973677    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7204ff5e6c12"
	I0930 04:09:08.985741    5073 logs.go:123] Gathering logs for container status ...
	I0930 04:09:08.985752    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 04:09:08.997504    5073 logs.go:123] Gathering logs for dmesg ...
	I0930 04:09:08.997513    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 04:09:09.001647    5073 logs.go:123] Gathering logs for kube-apiserver [5c05fceb7aa1] ...
	I0930 04:09:09.001653    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c05fceb7aa1"
	I0930 04:09:09.026552    5073 logs.go:123] Gathering logs for Docker ...
	I0930 04:09:09.026562    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0930 04:09:11.917013    4929 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:09:11.917052    4929 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:09:11.555208    5073 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:09:16.917717    4929 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:09:16.917737    4929 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0930 04:09:18.361469    4929 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0930 04:09:18.365278    4929 out.go:177] * Enabled addons: storage-provisioner
	I0930 04:09:16.557507    5073 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:09:16.557643    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0930 04:09:16.571198    5073 logs.go:276] 2 containers: [ed13ab559759 5c05fceb7aa1]
	I0930 04:09:16.571297    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0930 04:09:16.582695    5073 logs.go:276] 2 containers: [39ed46927cbf 9a2747d15d5c]
	I0930 04:09:16.582782    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0930 04:09:16.593794    5073 logs.go:276] 1 containers: [857ff5a64aef]
	I0930 04:09:16.593874    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0930 04:09:16.605199    5073 logs.go:276] 2 containers: [f87365fcd967 82cb48f54510]
	I0930 04:09:16.605297    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0930 04:09:16.616554    5073 logs.go:276] 1 containers: [341e764f9485]
	I0930 04:09:16.616640    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0930 04:09:16.627644    5073 logs.go:276] 2 containers: [e0d1000bec45 7204ff5e6c12]
	I0930 04:09:16.627724    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0930 04:09:16.638509    5073 logs.go:276] 0 containers: []
	W0930 04:09:16.638521    5073 logs.go:278] No container was found matching "kindnet"
	I0930 04:09:16.638591    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0930 04:09:16.649380    5073 logs.go:276] 1 containers: [e90f49b06c7e]
	I0930 04:09:16.649402    5073 logs.go:123] Gathering logs for dmesg ...
	I0930 04:09:16.649408    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 04:09:16.654000    5073 logs.go:123] Gathering logs for coredns [857ff5a64aef] ...
	I0930 04:09:16.654008    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 857ff5a64aef"
	I0930 04:09:16.668027    5073 logs.go:123] Gathering logs for kube-scheduler [f87365fcd967] ...
	I0930 04:09:16.668041    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f87365fcd967"
	I0930 04:09:16.680382    5073 logs.go:123] Gathering logs for kube-controller-manager [7204ff5e6c12] ...
	I0930 04:09:16.680392    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7204ff5e6c12"
	I0930 04:09:16.693341    5073 logs.go:123] Gathering logs for kubelet ...
	I0930 04:09:16.693357    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 04:09:16.730952    5073 logs.go:123] Gathering logs for etcd [39ed46927cbf] ...
	I0930 04:09:16.730964    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39ed46927cbf"
	I0930 04:09:16.746820    5073 logs.go:123] Gathering logs for etcd [9a2747d15d5c] ...
	I0930 04:09:16.746830    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a2747d15d5c"
	I0930 04:09:16.760966    5073 logs.go:123] Gathering logs for storage-provisioner [e90f49b06c7e] ...
	I0930 04:09:16.760976    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e90f49b06c7e"
	I0930 04:09:16.772425    5073 logs.go:123] Gathering logs for Docker ...
	I0930 04:09:16.772441    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0930 04:09:16.797896    5073 logs.go:123] Gathering logs for describe nodes ...
	I0930 04:09:16.797904    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0930 04:09:16.845085    5073 logs.go:123] Gathering logs for kube-scheduler [82cb48f54510] ...
	I0930 04:09:16.845100    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82cb48f54510"
	I0930 04:09:16.861612    5073 logs.go:123] Gathering logs for kube-proxy [341e764f9485] ...
	I0930 04:09:16.861622    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 341e764f9485"
	I0930 04:09:16.872946    5073 logs.go:123] Gathering logs for container status ...
	I0930 04:09:16.872954    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 04:09:16.885350    5073 logs.go:123] Gathering logs for kube-apiserver [ed13ab559759] ...
	I0930 04:09:16.885359    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed13ab559759"
	I0930 04:09:16.899556    5073 logs.go:123] Gathering logs for kube-apiserver [5c05fceb7aa1] ...
	I0930 04:09:16.899571    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c05fceb7aa1"
	I0930 04:09:16.924357    5073 logs.go:123] Gathering logs for kube-controller-manager [e0d1000bec45] ...
	I0930 04:09:16.924365    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0d1000bec45"
	I0930 04:09:19.443402    5073 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:09:18.377047    4929 addons.go:510] duration metric: took 31.588195s for enable addons: enabled=[storage-provisioner]
	I0930 04:09:21.919024    4929 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:09:21.919085    4929 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:09:24.445648    5073 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:09:24.445888    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0930 04:09:24.468513    5073 logs.go:276] 2 containers: [ed13ab559759 5c05fceb7aa1]
	I0930 04:09:24.468638    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0930 04:09:24.483292    5073 logs.go:276] 2 containers: [39ed46927cbf 9a2747d15d5c]
	I0930 04:09:24.483387    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0930 04:09:24.495977    5073 logs.go:276] 1 containers: [857ff5a64aef]
	I0930 04:09:24.496068    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0930 04:09:24.506284    5073 logs.go:276] 2 containers: [f87365fcd967 82cb48f54510]
	I0930 04:09:24.506365    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0930 04:09:24.519805    5073 logs.go:276] 1 containers: [341e764f9485]
	I0930 04:09:24.519890    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0930 04:09:24.530324    5073 logs.go:276] 2 containers: [e0d1000bec45 7204ff5e6c12]
	I0930 04:09:24.530401    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0930 04:09:24.540931    5073 logs.go:276] 0 containers: []
	W0930 04:09:24.540942    5073 logs.go:278] No container was found matching "kindnet"
	I0930 04:09:24.541012    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0930 04:09:24.551338    5073 logs.go:276] 1 containers: [e90f49b06c7e]
	I0930 04:09:24.551357    5073 logs.go:123] Gathering logs for describe nodes ...
	I0930 04:09:24.551362    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0930 04:09:24.587206    5073 logs.go:123] Gathering logs for kube-apiserver [5c05fceb7aa1] ...
	I0930 04:09:24.587219    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c05fceb7aa1"
	I0930 04:09:24.612857    5073 logs.go:123] Gathering logs for etcd [9a2747d15d5c] ...
	I0930 04:09:24.612874    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a2747d15d5c"
	I0930 04:09:24.629293    5073 logs.go:123] Gathering logs for kube-proxy [341e764f9485] ...
	I0930 04:09:24.629305    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 341e764f9485"
	I0930 04:09:24.640944    5073 logs.go:123] Gathering logs for kube-controller-manager [7204ff5e6c12] ...
	I0930 04:09:24.640954    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7204ff5e6c12"
	I0930 04:09:24.653010    5073 logs.go:123] Gathering logs for dmesg ...
	I0930 04:09:24.653019    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 04:09:24.657380    5073 logs.go:123] Gathering logs for kube-scheduler [82cb48f54510] ...
	I0930 04:09:24.657387    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82cb48f54510"
	I0930 04:09:24.684927    5073 logs.go:123] Gathering logs for storage-provisioner [e90f49b06c7e] ...
	I0930 04:09:24.684937    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e90f49b06c7e"
	I0930 04:09:24.696744    5073 logs.go:123] Gathering logs for kubelet ...
	I0930 04:09:24.696754    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 04:09:24.734223    5073 logs.go:123] Gathering logs for etcd [39ed46927cbf] ...
	I0930 04:09:24.734231    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39ed46927cbf"
	I0930 04:09:24.747666    5073 logs.go:123] Gathering logs for coredns [857ff5a64aef] ...
	I0930 04:09:24.747678    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 857ff5a64aef"
	I0930 04:09:24.758978    5073 logs.go:123] Gathering logs for container status ...
	I0930 04:09:24.758987    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 04:09:24.770614    5073 logs.go:123] Gathering logs for kube-apiserver [ed13ab559759] ...
	I0930 04:09:24.770624    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed13ab559759"
	I0930 04:09:24.786185    5073 logs.go:123] Gathering logs for kube-controller-manager [e0d1000bec45] ...
	I0930 04:09:24.786202    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0d1000bec45"
	I0930 04:09:24.804463    5073 logs.go:123] Gathering logs for Docker ...
	I0930 04:09:24.804475    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0930 04:09:24.830225    5073 logs.go:123] Gathering logs for kube-scheduler [f87365fcd967] ...
	I0930 04:09:24.830245    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f87365fcd967"
	I0930 04:09:26.920540    4929 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:09:26.920581    4929 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:09:27.344509    5073 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:09:31.922442    4929 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:09:31.922485    4929 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:09:32.346652    5073 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:09:32.346868    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0930 04:09:32.363877    5073 logs.go:276] 2 containers: [ed13ab559759 5c05fceb7aa1]
	I0930 04:09:32.363963    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0930 04:09:32.378478    5073 logs.go:276] 2 containers: [39ed46927cbf 9a2747d15d5c]
	I0930 04:09:32.378570    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0930 04:09:32.390158    5073 logs.go:276] 1 containers: [857ff5a64aef]
	I0930 04:09:32.390245    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0930 04:09:32.400467    5073 logs.go:276] 2 containers: [f87365fcd967 82cb48f54510]
	I0930 04:09:32.400551    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0930 04:09:32.410735    5073 logs.go:276] 1 containers: [341e764f9485]
	I0930 04:09:32.410821    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0930 04:09:32.421222    5073 logs.go:276] 2 containers: [e0d1000bec45 7204ff5e6c12]
	I0930 04:09:32.421302    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0930 04:09:32.431301    5073 logs.go:276] 0 containers: []
	W0930 04:09:32.431313    5073 logs.go:278] No container was found matching "kindnet"
	I0930 04:09:32.431385    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0930 04:09:32.441990    5073 logs.go:276] 1 containers: [e90f49b06c7e]
	I0930 04:09:32.442008    5073 logs.go:123] Gathering logs for kube-proxy [341e764f9485] ...
	I0930 04:09:32.442013    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 341e764f9485"
	I0930 04:09:32.453538    5073 logs.go:123] Gathering logs for storage-provisioner [e90f49b06c7e] ...
	I0930 04:09:32.453549    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e90f49b06c7e"
	I0930 04:09:32.465188    5073 logs.go:123] Gathering logs for Docker ...
	I0930 04:09:32.465199    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0930 04:09:32.490610    5073 logs.go:123] Gathering logs for etcd [9a2747d15d5c] ...
	I0930 04:09:32.490618    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a2747d15d5c"
	I0930 04:09:32.504763    5073 logs.go:123] Gathering logs for kube-scheduler [f87365fcd967] ...
	I0930 04:09:32.504773    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f87365fcd967"
	I0930 04:09:32.516092    5073 logs.go:123] Gathering logs for kube-controller-manager [7204ff5e6c12] ...
	I0930 04:09:32.516103    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7204ff5e6c12"
	I0930 04:09:32.528515    5073 logs.go:123] Gathering logs for kubelet ...
	I0930 04:09:32.528527    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 04:09:32.565721    5073 logs.go:123] Gathering logs for describe nodes ...
	I0930 04:09:32.565734    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0930 04:09:32.600800    5073 logs.go:123] Gathering logs for kube-scheduler [82cb48f54510] ...
	I0930 04:09:32.600811    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82cb48f54510"
	I0930 04:09:32.617928    5073 logs.go:123] Gathering logs for kube-controller-manager [e0d1000bec45] ...
	I0930 04:09:32.617939    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0d1000bec45"
	I0930 04:09:32.634802    5073 logs.go:123] Gathering logs for container status ...
	I0930 04:09:32.634813    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 04:09:32.649117    5073 logs.go:123] Gathering logs for dmesg ...
	I0930 04:09:32.649126    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 04:09:32.653339    5073 logs.go:123] Gathering logs for kube-apiserver [5c05fceb7aa1] ...
	I0930 04:09:32.653347    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c05fceb7aa1"
	I0930 04:09:32.677342    5073 logs.go:123] Gathering logs for etcd [39ed46927cbf] ...
	I0930 04:09:32.677358    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39ed46927cbf"
	I0930 04:09:32.691534    5073 logs.go:123] Gathering logs for coredns [857ff5a64aef] ...
	I0930 04:09:32.691544    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 857ff5a64aef"
	I0930 04:09:32.703405    5073 logs.go:123] Gathering logs for kube-apiserver [ed13ab559759] ...
	I0930 04:09:32.703417    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed13ab559759"
	I0930 04:09:35.221142    5073 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:09:36.924681    4929 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:09:36.924702    4929 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:09:40.223479    5073 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:09:40.223736    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0930 04:09:40.248036    5073 logs.go:276] 2 containers: [ed13ab559759 5c05fceb7aa1]
	I0930 04:09:40.248178    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0930 04:09:40.264364    5073 logs.go:276] 2 containers: [39ed46927cbf 9a2747d15d5c]
	I0930 04:09:40.264465    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0930 04:09:40.278285    5073 logs.go:276] 1 containers: [857ff5a64aef]
	I0930 04:09:40.278381    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0930 04:09:40.292681    5073 logs.go:276] 2 containers: [f87365fcd967 82cb48f54510]
	I0930 04:09:40.292773    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0930 04:09:40.303316    5073 logs.go:276] 1 containers: [341e764f9485]
	I0930 04:09:40.303398    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0930 04:09:40.313556    5073 logs.go:276] 2 containers: [e0d1000bec45 7204ff5e6c12]
	I0930 04:09:40.313638    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0930 04:09:40.324049    5073 logs.go:276] 0 containers: []
	W0930 04:09:40.324062    5073 logs.go:278] No container was found matching "kindnet"
	I0930 04:09:40.324137    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0930 04:09:40.334502    5073 logs.go:276] 1 containers: [e90f49b06c7e]
	I0930 04:09:40.334520    5073 logs.go:123] Gathering logs for kube-controller-manager [e0d1000bec45] ...
	I0930 04:09:40.334526    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0d1000bec45"
	I0930 04:09:40.351946    5073 logs.go:123] Gathering logs for container status ...
	I0930 04:09:40.351957    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 04:09:40.363781    5073 logs.go:123] Gathering logs for kubelet ...
	I0930 04:09:40.363792    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 04:09:40.402513    5073 logs.go:123] Gathering logs for dmesg ...
	I0930 04:09:40.402529    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 04:09:40.406650    5073 logs.go:123] Gathering logs for etcd [9a2747d15d5c] ...
	I0930 04:09:40.406656    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a2747d15d5c"
	I0930 04:09:40.420642    5073 logs.go:123] Gathering logs for coredns [857ff5a64aef] ...
	I0930 04:09:40.420651    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 857ff5a64aef"
	I0930 04:09:40.432106    5073 logs.go:123] Gathering logs for kube-controller-manager [7204ff5e6c12] ...
	I0930 04:09:40.432116    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7204ff5e6c12"
	I0930 04:09:40.444538    5073 logs.go:123] Gathering logs for describe nodes ...
	I0930 04:09:40.444548    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0930 04:09:40.479337    5073 logs.go:123] Gathering logs for kube-apiserver [ed13ab559759] ...
	I0930 04:09:40.479352    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed13ab559759"
	I0930 04:09:40.493975    5073 logs.go:123] Gathering logs for kube-scheduler [f87365fcd967] ...
	I0930 04:09:40.493985    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f87365fcd967"
	I0930 04:09:40.511978    5073 logs.go:123] Gathering logs for kube-scheduler [82cb48f54510] ...
	I0930 04:09:40.511993    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82cb48f54510"
	I0930 04:09:40.528671    5073 logs.go:123] Gathering logs for etcd [39ed46927cbf] ...
	I0930 04:09:40.528681    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39ed46927cbf"
	I0930 04:09:40.546160    5073 logs.go:123] Gathering logs for storage-provisioner [e90f49b06c7e] ...
	I0930 04:09:40.546171    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e90f49b06c7e"
	I0930 04:09:40.557523    5073 logs.go:123] Gathering logs for Docker ...
	I0930 04:09:40.557537    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0930 04:09:40.582281    5073 logs.go:123] Gathering logs for kube-apiserver [5c05fceb7aa1] ...
	I0930 04:09:40.582293    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c05fceb7aa1"
	I0930 04:09:40.608589    5073 logs.go:123] Gathering logs for kube-proxy [341e764f9485] ...
	I0930 04:09:40.608600    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 341e764f9485"
	I0930 04:09:41.925341    4929 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:09:41.925387    4929 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:09:43.122184    5073 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:09:46.926833    4929 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
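
The alternating "Checking"/"stopped" pairs above (PIDs 4929 and 5073 are two test runs writing to the same log) are minikube's apiserver wait loop: each attempt GETs the guest's /healthz endpoint and reports the wrapped error once the client deadline expires. A minimal, self-contained Go sketch of that probe follows; the 5-second timeout is an assumption inferred from the ~5 s spacing between "Checking" and "stopped" entries, and this is an illustration of the pattern, not minikube's actual api_server.go code.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func checkHealthz(url string) error {
	client := &http.Client{
		Timeout: 5 * time.Second, // assumed; matches the ~5 s gap between log entries
		Transport: &http.Transport{
			// The apiserver inside the VM serves a self-signed certificate.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		// Mirrors the logged form:
		// stopped: <url>: Get "<url>": context deadline exceeded
		return fmt.Errorf("stopped: %s: %w", url, err)
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned status %d", resp.StatusCode)
	}
	return nil
}

func main() {
	fmt.Println("Checking apiserver healthz at https://10.0.2.15:8443/healthz ...")
	if err := checkHealthz("https://10.0.2.15:8443/healthz"); err != nil {
		fmt.Println(err)
	}
}
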
	I0930 04:09:46.926991    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0930 04:09:46.940368    4929 logs.go:276] 1 containers: [c3591c3891b2]
	I0930 04:09:46.940450    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0930 04:09:46.951161    4929 logs.go:276] 1 containers: [c9f50b35a283]
	I0930 04:09:46.951255    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0930 04:09:46.961634    4929 logs.go:276] 2 containers: [85a0a5385195 4b60eaea6a29]
	I0930 04:09:46.961718    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0930 04:09:46.972318    4929 logs.go:276] 1 containers: [7dc64314198d]
	I0930 04:09:46.972397    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0930 04:09:46.983095    4929 logs.go:276] 1 containers: [485edac7e4e9]
	I0930 04:09:46.983179    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0930 04:09:46.993438    4929 logs.go:276] 1 containers: [55b4e3fee39e]
	I0930 04:09:46.993517    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0930 04:09:47.003957    4929 logs.go:276] 0 containers: []
	W0930 04:09:47.003967    4929 logs.go:278] No container was found matching "kindnet"
	I0930 04:09:47.004040    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0930 04:09:47.014427    4929 logs.go:276] 1 containers: [918371f0f495]
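
Once the probe fails, the runner falls back to enumerating control-plane containers, one docker ps filter per component, as in the eight Run/containers pairs just above. Below is a sketch of that step, assuming a local docker CLI on PATH (minikube issues the same command over SSH inside the VM); containerIDs is an illustrative helper, not minikube's API. The empty result for "kindnet" is expected when the cluster does not use the kindnet CNI, which is why only a W-level warning is emitted.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs lists Docker containers whose names carry the k8s_<component>
// prefix that kubelet's CRI integration assigns to pod containers.
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner",
	}
	for _, c := range components {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Println("docker ps failed:", err)
			continue
		}
		if len(ids) == 0 {
			fmt.Printf("No container was found matching %q\n", c)
			continue
		}
		fmt.Printf("%d containers: %v\n", len(ids), ids)
	}
}
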
	I0930 04:09:47.014442    4929 logs.go:123] Gathering logs for container status ...
	I0930 04:09:47.014448    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 04:09:47.030154    4929 logs.go:123] Gathering logs for kubelet ...
	I0930 04:09:47.030170    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 04:09:47.065698    4929 logs.go:123] Gathering logs for describe nodes ...
	I0930 04:09:47.065709    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0930 04:09:47.102141    4929 logs.go:123] Gathering logs for kube-apiserver [c3591c3891b2] ...
	I0930 04:09:47.102157    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3591c3891b2"
	I0930 04:09:47.120354    4929 logs.go:123] Gathering logs for etcd [c9f50b35a283] ...
	I0930 04:09:47.120368    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9f50b35a283"
	I0930 04:09:47.134436    4929 logs.go:123] Gathering logs for coredns [4b60eaea6a29] ...
	I0930 04:09:47.134445    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b60eaea6a29"
	I0930 04:09:47.146024    4929 logs.go:123] Gathering logs for kube-scheduler [7dc64314198d] ...
	I0930 04:09:47.146034    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7dc64314198d"
	I0930 04:09:47.165353    4929 logs.go:123] Gathering logs for storage-provisioner [918371f0f495] ...
	I0930 04:09:47.165364    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 918371f0f495"
	I0930 04:09:47.176359    4929 logs.go:123] Gathering logs for dmesg ...
	I0930 04:09:47.176368    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 04:09:47.180697    4929 logs.go:123] Gathering logs for coredns [85a0a5385195] ...
	I0930 04:09:47.180706    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85a0a5385195"
	I0930 04:09:47.192308    4929 logs.go:123] Gathering logs for kube-proxy [485edac7e4e9] ...
	I0930 04:09:47.192317    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 485edac7e4e9"
	I0930 04:09:47.204057    4929 logs.go:123] Gathering logs for kube-controller-manager [55b4e3fee39e] ...
	I0930 04:09:47.204066    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 55b4e3fee39e"
	I0930 04:09:47.221856    4929 logs.go:123] Gathering logs for Docker ...
	I0930 04:09:47.221867    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
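
Each discovered source is then dumped in turn: journalctl for the kubelet and Docker/cri-docker units, a filtered dmesg for kernel warnings, kubectl describe nodes against the guest kubeconfig, and docker logs --tail 400 per container ID, with a crictl-or-docker fallback for overall container status. A compact sketch of that pass follows; the command strings are copied verbatim from the log, the gather helper is illustrative, and the commands run locally through /bin/bash -c here rather than over ssh_runner.go's SSH session.

package main

import (
	"fmt"
	"os/exec"
)

// gather runs one log source as a single shell pipeline, mirroring the
// ssh_runner.go entries above: /bin/bash -c "<cmd>", output capped by the
// command itself at the last 400 lines per source.
func gather(name, cmd string) {
	fmt.Printf("Gathering logs for %s ...\n", name)
	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	if err != nil {
		fmt.Printf("%s failed: %v\n", name, err)
	}
	fmt.Print(string(out))
}

func main() {
	// systemd units and the kernel ring buffer:
	gather("kubelet", "sudo journalctl -u kubelet -n 400")
	gather("Docker", "sudo journalctl -u docker -u cri-docker -n 400")
	gather("dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400")
	// one docker logs call per container ID found in the enumeration step:
	gather("kube-apiserver [c3591c3891b2]", "docker logs --tail 400 c3591c3891b2")
	// container status: prefer crictl when installed, else fall back to docker:
	gather("container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a")
}
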
	I0930 04:09:48.122846    5073 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:09:48.123100    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0930 04:09:48.147360    5073 logs.go:276] 2 containers: [ed13ab559759 5c05fceb7aa1]
	I0930 04:09:48.147486    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0930 04:09:48.164236    5073 logs.go:276] 2 containers: [39ed46927cbf 9a2747d15d5c]
	I0930 04:09:48.164337    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0930 04:09:48.176892    5073 logs.go:276] 1 containers: [857ff5a64aef]
	I0930 04:09:48.176983    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0930 04:09:48.188659    5073 logs.go:276] 2 containers: [f87365fcd967 82cb48f54510]
	I0930 04:09:48.188743    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0930 04:09:48.198820    5073 logs.go:276] 1 containers: [341e764f9485]
	I0930 04:09:48.198902    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0930 04:09:48.209212    5073 logs.go:276] 2 containers: [e0d1000bec45 7204ff5e6c12]
	I0930 04:09:48.209291    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0930 04:09:48.219109    5073 logs.go:276] 0 containers: []
	W0930 04:09:48.219125    5073 logs.go:278] No container was found matching "kindnet"
	I0930 04:09:48.219191    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0930 04:09:48.229839    5073 logs.go:276] 1 containers: [e90f49b06c7e]
	I0930 04:09:48.229861    5073 logs.go:123] Gathering logs for coredns [857ff5a64aef] ...
	I0930 04:09:48.229866    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 857ff5a64aef"
	I0930 04:09:48.241847    5073 logs.go:123] Gathering logs for kube-scheduler [f87365fcd967] ...
	I0930 04:09:48.241859    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f87365fcd967"
	I0930 04:09:48.256707    5073 logs.go:123] Gathering logs for kube-scheduler [82cb48f54510] ...
	I0930 04:09:48.256717    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82cb48f54510"
	I0930 04:09:48.283126    5073 logs.go:123] Gathering logs for kube-proxy [341e764f9485] ...
	I0930 04:09:48.283137    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 341e764f9485"
	I0930 04:09:48.294755    5073 logs.go:123] Gathering logs for kube-controller-manager [7204ff5e6c12] ...
	I0930 04:09:48.294769    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7204ff5e6c12"
	I0930 04:09:48.307127    5073 logs.go:123] Gathering logs for container status ...
	I0930 04:09:48.307140    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 04:09:48.319612    5073 logs.go:123] Gathering logs for kube-apiserver [5c05fceb7aa1] ...
	I0930 04:09:48.319627    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c05fceb7aa1"
	I0930 04:09:48.345112    5073 logs.go:123] Gathering logs for kubelet ...
	I0930 04:09:48.345125    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 04:09:48.383558    5073 logs.go:123] Gathering logs for dmesg ...
	I0930 04:09:48.383565    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 04:09:48.387828    5073 logs.go:123] Gathering logs for kube-apiserver [ed13ab559759] ...
	I0930 04:09:48.387835    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed13ab559759"
	I0930 04:09:48.403347    5073 logs.go:123] Gathering logs for etcd [39ed46927cbf] ...
	I0930 04:09:48.403360    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39ed46927cbf"
	I0930 04:09:48.423277    5073 logs.go:123] Gathering logs for Docker ...
	I0930 04:09:48.423290    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0930 04:09:48.446707    5073 logs.go:123] Gathering logs for describe nodes ...
	I0930 04:09:48.446714    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0930 04:09:48.481837    5073 logs.go:123] Gathering logs for etcd [9a2747d15d5c] ...
	I0930 04:09:48.481852    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a2747d15d5c"
	I0930 04:09:48.496013    5073 logs.go:123] Gathering logs for kube-controller-manager [e0d1000bec45] ...
	I0930 04:09:48.496025    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0d1000bec45"
	I0930 04:09:48.513291    5073 logs.go:123] Gathering logs for storage-provisioner [e90f49b06c7e] ...
	I0930 04:09:48.513303    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e90f49b06c7e"
	I0930 04:09:49.747343    4929 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:09:51.024906    5073 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:09:54.746910    4929 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:09:54.747433    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0930 04:09:54.779354    4929 logs.go:276] 1 containers: [c3591c3891b2]
	I0930 04:09:54.779513    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0930 04:09:54.797665    4929 logs.go:276] 1 containers: [c9f50b35a283]
	I0930 04:09:54.797785    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0930 04:09:54.812005    4929 logs.go:276] 2 containers: [85a0a5385195 4b60eaea6a29]
	I0930 04:09:54.812092    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0930 04:09:54.824179    4929 logs.go:276] 1 containers: [7dc64314198d]
	I0930 04:09:54.824252    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0930 04:09:54.834964    4929 logs.go:276] 1 containers: [485edac7e4e9]
	I0930 04:09:54.835047    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0930 04:09:54.845250    4929 logs.go:276] 1 containers: [55b4e3fee39e]
	I0930 04:09:54.845333    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0930 04:09:54.855877    4929 logs.go:276] 0 containers: []
	W0930 04:09:54.855889    4929 logs.go:278] No container was found matching "kindnet"
	I0930 04:09:54.855965    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0930 04:09:54.867897    4929 logs.go:276] 1 containers: [918371f0f495]
	I0930 04:09:54.867913    4929 logs.go:123] Gathering logs for etcd [c9f50b35a283] ...
	I0930 04:09:54.867918    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9f50b35a283"
	I0930 04:09:54.883088    4929 logs.go:123] Gathering logs for kube-scheduler [7dc64314198d] ...
	I0930 04:09:54.883103    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7dc64314198d"
	I0930 04:09:54.899389    4929 logs.go:123] Gathering logs for kube-proxy [485edac7e4e9] ...
	I0930 04:09:54.899404    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 485edac7e4e9"
	I0930 04:09:54.911788    4929 logs.go:123] Gathering logs for kube-controller-manager [55b4e3fee39e] ...
	I0930 04:09:54.911798    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 55b4e3fee39e"
	I0930 04:09:54.930001    4929 logs.go:123] Gathering logs for Docker ...
	I0930 04:09:54.930011    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0930 04:09:54.954840    4929 logs.go:123] Gathering logs for kubelet ...
	I0930 04:09:54.954847    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 04:09:54.989089    4929 logs.go:123] Gathering logs for describe nodes ...
	I0930 04:09:54.989105    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0930 04:09:55.023904    4929 logs.go:123] Gathering logs for kube-apiserver [c3591c3891b2] ...
	I0930 04:09:55.023916    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3591c3891b2"
	I0930 04:09:55.038268    4929 logs.go:123] Gathering logs for container status ...
	I0930 04:09:55.038278    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 04:09:55.051150    4929 logs.go:123] Gathering logs for storage-provisioner [918371f0f495] ...
	I0930 04:09:55.051165    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 918371f0f495"
	I0930 04:09:55.064065    4929 logs.go:123] Gathering logs for dmesg ...
	I0930 04:09:55.064079    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 04:09:55.068598    4929 logs.go:123] Gathering logs for coredns [85a0a5385195] ...
	I0930 04:09:55.068605    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85a0a5385195"
	I0930 04:09:55.080181    4929 logs.go:123] Gathering logs for coredns [4b60eaea6a29] ...
	I0930 04:09:55.080190    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b60eaea6a29"
	I0930 04:09:56.024843    5073 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:09:57.592368    4929 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:09:56.025407    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0930 04:09:56.060880    5073 logs.go:276] 2 containers: [ed13ab559759 5c05fceb7aa1]
	I0930 04:09:56.061048    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0930 04:09:56.079913    5073 logs.go:276] 2 containers: [39ed46927cbf 9a2747d15d5c]
	I0930 04:09:56.080031    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0930 04:09:56.094550    5073 logs.go:276] 1 containers: [857ff5a64aef]
	I0930 04:09:56.094647    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0930 04:09:56.106928    5073 logs.go:276] 2 containers: [f87365fcd967 82cb48f54510]
	I0930 04:09:56.107008    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0930 04:09:56.117886    5073 logs.go:276] 1 containers: [341e764f9485]
	I0930 04:09:56.117965    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0930 04:09:56.128899    5073 logs.go:276] 2 containers: [e0d1000bec45 7204ff5e6c12]
	I0930 04:09:56.128984    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0930 04:09:56.139605    5073 logs.go:276] 0 containers: []
	W0930 04:09:56.139619    5073 logs.go:278] No container was found matching "kindnet"
	I0930 04:09:56.139695    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0930 04:09:56.150505    5073 logs.go:276] 1 containers: [e90f49b06c7e]
	I0930 04:09:56.150523    5073 logs.go:123] Gathering logs for kubelet ...
	I0930 04:09:56.150531    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 04:09:56.189324    5073 logs.go:123] Gathering logs for describe nodes ...
	I0930 04:09:56.189334    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0930 04:09:56.223925    5073 logs.go:123] Gathering logs for etcd [9a2747d15d5c] ...
	I0930 04:09:56.223936    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a2747d15d5c"
	I0930 04:09:56.247952    5073 logs.go:123] Gathering logs for kube-scheduler [f87365fcd967] ...
	I0930 04:09:56.247962    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f87365fcd967"
	I0930 04:09:56.261709    5073 logs.go:123] Gathering logs for kube-controller-manager [e0d1000bec45] ...
	I0930 04:09:56.261722    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0d1000bec45"
	I0930 04:09:56.280202    5073 logs.go:123] Gathering logs for storage-provisioner [e90f49b06c7e] ...
	I0930 04:09:56.280213    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e90f49b06c7e"
	I0930 04:09:56.291667    5073 logs.go:123] Gathering logs for kube-apiserver [ed13ab559759] ...
	I0930 04:09:56.291680    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed13ab559759"
	I0930 04:09:56.306071    5073 logs.go:123] Gathering logs for etcd [39ed46927cbf] ...
	I0930 04:09:56.306084    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39ed46927cbf"
	I0930 04:09:56.319893    5073 logs.go:123] Gathering logs for kube-scheduler [82cb48f54510] ...
	I0930 04:09:56.319903    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82cb48f54510"
	I0930 04:09:56.336809    5073 logs.go:123] Gathering logs for kube-proxy [341e764f9485] ...
	I0930 04:09:56.336820    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 341e764f9485"
	I0930 04:09:56.348920    5073 logs.go:123] Gathering logs for Docker ...
	I0930 04:09:56.348930    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0930 04:09:56.372028    5073 logs.go:123] Gathering logs for container status ...
	I0930 04:09:56.372036    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 04:09:56.384245    5073 logs.go:123] Gathering logs for dmesg ...
	I0930 04:09:56.384256    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 04:09:56.388388    5073 logs.go:123] Gathering logs for kube-apiserver [5c05fceb7aa1] ...
	I0930 04:09:56.388395    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c05fceb7aa1"
	I0930 04:09:56.413218    5073 logs.go:123] Gathering logs for coredns [857ff5a64aef] ...
	I0930 04:09:56.413228    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 857ff5a64aef"
	I0930 04:09:56.427234    5073 logs.go:123] Gathering logs for kube-controller-manager [7204ff5e6c12] ...
	I0930 04:09:56.427244    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7204ff5e6c12"
	I0930 04:09:58.941485    5073 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:10:02.593051    4929 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:10:02.593269    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0930 04:10:02.604683    4929 logs.go:276] 1 containers: [c3591c3891b2]
	I0930 04:10:02.604777    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0930 04:10:02.615302    4929 logs.go:276] 1 containers: [c9f50b35a283]
	I0930 04:10:02.615383    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0930 04:10:02.626566    4929 logs.go:276] 2 containers: [85a0a5385195 4b60eaea6a29]
	I0930 04:10:02.626654    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0930 04:10:02.637190    4929 logs.go:276] 1 containers: [7dc64314198d]
	I0930 04:10:02.637263    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0930 04:10:02.647500    4929 logs.go:276] 1 containers: [485edac7e4e9]
	I0930 04:10:02.647583    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0930 04:10:02.658355    4929 logs.go:276] 1 containers: [55b4e3fee39e]
	I0930 04:10:02.658442    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0930 04:10:02.668023    4929 logs.go:276] 0 containers: []
	W0930 04:10:02.668042    4929 logs.go:278] No container was found matching "kindnet"
	I0930 04:10:02.668116    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0930 04:10:02.678268    4929 logs.go:276] 1 containers: [918371f0f495]
	I0930 04:10:02.678283    4929 logs.go:123] Gathering logs for dmesg ...
	I0930 04:10:02.678288    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 04:10:02.683542    4929 logs.go:123] Gathering logs for coredns [85a0a5385195] ...
	I0930 04:10:02.683551    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85a0a5385195"
	I0930 04:10:02.696906    4929 logs.go:123] Gathering logs for kube-scheduler [7dc64314198d] ...
	I0930 04:10:02.696919    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7dc64314198d"
	I0930 04:10:02.712089    4929 logs.go:123] Gathering logs for kube-proxy [485edac7e4e9] ...
	I0930 04:10:02.712099    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 485edac7e4e9"
	I0930 04:10:02.724277    4929 logs.go:123] Gathering logs for kube-controller-manager [55b4e3fee39e] ...
	I0930 04:10:02.724287    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 55b4e3fee39e"
	I0930 04:10:02.741994    4929 logs.go:123] Gathering logs for kubelet ...
	I0930 04:10:02.742004    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 04:10:02.776453    4929 logs.go:123] Gathering logs for describe nodes ...
	I0930 04:10:02.776463    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0930 04:10:02.813189    4929 logs.go:123] Gathering logs for kube-apiserver [c3591c3891b2] ...
	I0930 04:10:02.813202    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3591c3891b2"
	I0930 04:10:02.830936    4929 logs.go:123] Gathering logs for etcd [c9f50b35a283] ...
	I0930 04:10:02.830947    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9f50b35a283"
	I0930 04:10:02.844665    4929 logs.go:123] Gathering logs for coredns [4b60eaea6a29] ...
	I0930 04:10:02.844674    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b60eaea6a29"
	I0930 04:10:02.864970    4929 logs.go:123] Gathering logs for storage-provisioner [918371f0f495] ...
	I0930 04:10:02.864985    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 918371f0f495"
	I0930 04:10:02.877017    4929 logs.go:123] Gathering logs for Docker ...
	I0930 04:10:02.877027    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0930 04:10:02.900392    4929 logs.go:123] Gathering logs for container status ...
	I0930 04:10:02.900401    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 04:10:03.942345    5073 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:10:03.942630    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0930 04:10:03.966930    5073 logs.go:276] 2 containers: [ed13ab559759 5c05fceb7aa1]
	I0930 04:10:03.967067    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0930 04:10:03.982422    5073 logs.go:276] 2 containers: [39ed46927cbf 9a2747d15d5c]
	I0930 04:10:03.982517    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0930 04:10:03.994696    5073 logs.go:276] 1 containers: [857ff5a64aef]
	I0930 04:10:03.994781    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0930 04:10:04.005479    5073 logs.go:276] 2 containers: [f87365fcd967 82cb48f54510]
	I0930 04:10:04.005563    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0930 04:10:04.016184    5073 logs.go:276] 1 containers: [341e764f9485]
	I0930 04:10:04.016267    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0930 04:10:04.028507    5073 logs.go:276] 2 containers: [e0d1000bec45 7204ff5e6c12]
	I0930 04:10:04.028593    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0930 04:10:04.038616    5073 logs.go:276] 0 containers: []
	W0930 04:10:04.038627    5073 logs.go:278] No container was found matching "kindnet"
	I0930 04:10:04.038691    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0930 04:10:04.049054    5073 logs.go:276] 1 containers: [e90f49b06c7e]
	I0930 04:10:04.049074    5073 logs.go:123] Gathering logs for kubelet ...
	I0930 04:10:04.049080    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 04:10:04.089044    5073 logs.go:123] Gathering logs for kube-apiserver [ed13ab559759] ...
	I0930 04:10:04.089059    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed13ab559759"
	I0930 04:10:04.103222    5073 logs.go:123] Gathering logs for kube-scheduler [f87365fcd967] ...
	I0930 04:10:04.103233    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f87365fcd967"
	I0930 04:10:04.114837    5073 logs.go:123] Gathering logs for storage-provisioner [e90f49b06c7e] ...
	I0930 04:10:04.114850    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e90f49b06c7e"
	I0930 04:10:04.126837    5073 logs.go:123] Gathering logs for Docker ...
	I0930 04:10:04.126851    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0930 04:10:04.150167    5073 logs.go:123] Gathering logs for describe nodes ...
	I0930 04:10:04.150177    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0930 04:10:04.186215    5073 logs.go:123] Gathering logs for kube-apiserver [5c05fceb7aa1] ...
	I0930 04:10:04.186231    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c05fceb7aa1"
	I0930 04:10:04.211192    5073 logs.go:123] Gathering logs for etcd [39ed46927cbf] ...
	I0930 04:10:04.211207    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39ed46927cbf"
	I0930 04:10:04.225354    5073 logs.go:123] Gathering logs for kube-scheduler [82cb48f54510] ...
	I0930 04:10:04.225369    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82cb48f54510"
	I0930 04:10:04.242759    5073 logs.go:123] Gathering logs for kube-controller-manager [e0d1000bec45] ...
	I0930 04:10:04.242773    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0d1000bec45"
	I0930 04:10:04.260864    5073 logs.go:123] Gathering logs for etcd [9a2747d15d5c] ...
	I0930 04:10:04.260881    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a2747d15d5c"
	I0930 04:10:04.275620    5073 logs.go:123] Gathering logs for coredns [857ff5a64aef] ...
	I0930 04:10:04.275631    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 857ff5a64aef"
	I0930 04:10:04.287438    5073 logs.go:123] Gathering logs for kube-proxy [341e764f9485] ...
	I0930 04:10:04.287447    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 341e764f9485"
	I0930 04:10:04.299552    5073 logs.go:123] Gathering logs for container status ...
	I0930 04:10:04.299565    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 04:10:04.312558    5073 logs.go:123] Gathering logs for dmesg ...
	I0930 04:10:04.312571    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 04:10:04.316802    5073 logs.go:123] Gathering logs for kube-controller-manager [7204ff5e6c12] ...
	I0930 04:10:04.316812    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7204ff5e6c12"
	I0930 04:10:05.412034    4929 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:10:06.830507    5073 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:10:10.412970    4929 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:10:10.413138    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0930 04:10:10.426952    4929 logs.go:276] 1 containers: [c3591c3891b2]
	I0930 04:10:10.427049    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0930 04:10:10.438774    4929 logs.go:276] 1 containers: [c9f50b35a283]
	I0930 04:10:10.438856    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0930 04:10:10.450077    4929 logs.go:276] 2 containers: [85a0a5385195 4b60eaea6a29]
	I0930 04:10:10.450165    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0930 04:10:10.468942    4929 logs.go:276] 1 containers: [7dc64314198d]
	I0930 04:10:10.469024    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0930 04:10:10.479188    4929 logs.go:276] 1 containers: [485edac7e4e9]
	I0930 04:10:10.479271    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0930 04:10:10.489532    4929 logs.go:276] 1 containers: [55b4e3fee39e]
	I0930 04:10:10.489609    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0930 04:10:10.499758    4929 logs.go:276] 0 containers: []
	W0930 04:10:10.499786    4929 logs.go:278] No container was found matching "kindnet"
	I0930 04:10:10.499870    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0930 04:10:10.510748    4929 logs.go:276] 1 containers: [918371f0f495]
	I0930 04:10:10.510764    4929 logs.go:123] Gathering logs for kube-proxy [485edac7e4e9] ...
	I0930 04:10:10.510772    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 485edac7e4e9"
	I0930 04:10:10.522273    4929 logs.go:123] Gathering logs for kube-controller-manager [55b4e3fee39e] ...
	I0930 04:10:10.522287    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 55b4e3fee39e"
	I0930 04:10:10.539396    4929 logs.go:123] Gathering logs for storage-provisioner [918371f0f495] ...
	I0930 04:10:10.539410    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 918371f0f495"
	I0930 04:10:10.550849    4929 logs.go:123] Gathering logs for container status ...
	I0930 04:10:10.550859    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 04:10:10.566236    4929 logs.go:123] Gathering logs for kubelet ...
	I0930 04:10:10.566247    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 04:10:10.602149    4929 logs.go:123] Gathering logs for describe nodes ...
	I0930 04:10:10.602156    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0930 04:10:10.639015    4929 logs.go:123] Gathering logs for kube-scheduler [7dc64314198d] ...
	I0930 04:10:10.639027    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7dc64314198d"
	I0930 04:10:10.662091    4929 logs.go:123] Gathering logs for coredns [85a0a5385195] ...
	I0930 04:10:10.662102    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85a0a5385195"
	I0930 04:10:10.674493    4929 logs.go:123] Gathering logs for coredns [4b60eaea6a29] ...
	I0930 04:10:10.674509    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b60eaea6a29"
	I0930 04:10:10.693122    4929 logs.go:123] Gathering logs for Docker ...
	I0930 04:10:10.693134    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0930 04:10:10.741068    4929 logs.go:123] Gathering logs for dmesg ...
	I0930 04:10:10.741080    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 04:10:10.746428    4929 logs.go:123] Gathering logs for kube-apiserver [c3591c3891b2] ...
	I0930 04:10:10.746435    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3591c3891b2"
	I0930 04:10:10.760748    4929 logs.go:123] Gathering logs for etcd [c9f50b35a283] ...
	I0930 04:10:10.760759    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9f50b35a283"
	I0930 04:10:11.831764    5073 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:10:11.831889    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0930 04:10:11.843394    5073 logs.go:276] 2 containers: [ed13ab559759 5c05fceb7aa1]
	I0930 04:10:11.843493    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0930 04:10:11.854370    5073 logs.go:276] 2 containers: [39ed46927cbf 9a2747d15d5c]
	I0930 04:10:11.854452    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0930 04:10:11.865083    5073 logs.go:276] 1 containers: [857ff5a64aef]
	I0930 04:10:11.865171    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0930 04:10:11.878654    5073 logs.go:276] 2 containers: [f87365fcd967 82cb48f54510]
	I0930 04:10:11.878745    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0930 04:10:11.888668    5073 logs.go:276] 1 containers: [341e764f9485]
	I0930 04:10:11.888750    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0930 04:10:11.899247    5073 logs.go:276] 2 containers: [e0d1000bec45 7204ff5e6c12]
	I0930 04:10:11.899330    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0930 04:10:11.909876    5073 logs.go:276] 0 containers: []
	W0930 04:10:11.909887    5073 logs.go:278] No container was found matching "kindnet"
	I0930 04:10:11.909956    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0930 04:10:11.920348    5073 logs.go:276] 1 containers: [e90f49b06c7e]
	I0930 04:10:11.920369    5073 logs.go:123] Gathering logs for kubelet ...
	I0930 04:10:11.920374    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 04:10:11.959424    5073 logs.go:123] Gathering logs for kube-apiserver [5c05fceb7aa1] ...
	I0930 04:10:11.959434    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c05fceb7aa1"
	I0930 04:10:11.984058    5073 logs.go:123] Gathering logs for storage-provisioner [e90f49b06c7e] ...
	I0930 04:10:11.984068    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e90f49b06c7e"
	I0930 04:10:11.998343    5073 logs.go:123] Gathering logs for container status ...
	I0930 04:10:11.998355    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 04:10:12.010493    5073 logs.go:123] Gathering logs for kube-controller-manager [7204ff5e6c12] ...
	I0930 04:10:12.010505    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7204ff5e6c12"
	I0930 04:10:12.028520    5073 logs.go:123] Gathering logs for dmesg ...
	I0930 04:10:12.028533    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 04:10:12.032851    5073 logs.go:123] Gathering logs for describe nodes ...
	I0930 04:10:12.032860    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0930 04:10:12.067385    5073 logs.go:123] Gathering logs for etcd [39ed46927cbf] ...
	I0930 04:10:12.067401    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39ed46927cbf"
	I0930 04:10:12.081510    5073 logs.go:123] Gathering logs for kube-proxy [341e764f9485] ...
	I0930 04:10:12.081519    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 341e764f9485"
	I0930 04:10:12.092806    5073 logs.go:123] Gathering logs for kube-controller-manager [e0d1000bec45] ...
	I0930 04:10:12.092817    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0d1000bec45"
	I0930 04:10:12.112706    5073 logs.go:123] Gathering logs for etcd [9a2747d15d5c] ...
	I0930 04:10:12.112717    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a2747d15d5c"
	I0930 04:10:12.126802    5073 logs.go:123] Gathering logs for kube-scheduler [f87365fcd967] ...
	I0930 04:10:12.126813    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f87365fcd967"
	I0930 04:10:12.138259    5073 logs.go:123] Gathering logs for kube-apiserver [ed13ab559759] ...
	I0930 04:10:12.138271    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed13ab559759"
	I0930 04:10:12.160981    5073 logs.go:123] Gathering logs for coredns [857ff5a64aef] ...
	I0930 04:10:12.160992    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 857ff5a64aef"
	I0930 04:10:12.171852    5073 logs.go:123] Gathering logs for kube-scheduler [82cb48f54510] ...
	I0930 04:10:12.171864    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82cb48f54510"
	I0930 04:10:12.198712    5073 logs.go:123] Gathering logs for Docker ...
	I0930 04:10:12.198724    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0930 04:10:13.276713    4929 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:10:14.726176    5073 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:10:19.728224    5073 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:10:19.728461    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0930 04:10:19.751390    5073 logs.go:276] 2 containers: [ed13ab559759 5c05fceb7aa1]
	I0930 04:10:19.751537    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0930 04:10:19.768070    5073 logs.go:276] 2 containers: [39ed46927cbf 9a2747d15d5c]
	I0930 04:10:19.768172    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0930 04:10:19.780835    5073 logs.go:276] 1 containers: [857ff5a64aef]
	I0930 04:10:19.780925    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0930 04:10:19.792038    5073 logs.go:276] 2 containers: [f87365fcd967 82cb48f54510]
	I0930 04:10:19.792116    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0930 04:10:19.802402    5073 logs.go:276] 1 containers: [341e764f9485]
	I0930 04:10:19.802474    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0930 04:10:19.821292    5073 logs.go:276] 2 containers: [e0d1000bec45 7204ff5e6c12]
	I0930 04:10:19.821366    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0930 04:10:19.831165    5073 logs.go:276] 0 containers: []
	W0930 04:10:19.831174    5073 logs.go:278] No container was found matching "kindnet"
	I0930 04:10:19.831237    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0930 04:10:19.841892    5073 logs.go:276] 1 containers: [e90f49b06c7e]
	I0930 04:10:19.841910    5073 logs.go:123] Gathering logs for describe nodes ...
	I0930 04:10:19.841916    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0930 04:10:19.880298    5073 logs.go:123] Gathering logs for etcd [9a2747d15d5c] ...
	I0930 04:10:19.880309    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a2747d15d5c"
	I0930 04:10:19.894796    5073 logs.go:123] Gathering logs for kubelet ...
	I0930 04:10:19.894810    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 04:10:19.931262    5073 logs.go:123] Gathering logs for dmesg ...
	I0930 04:10:19.931272    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 04:10:19.935695    5073 logs.go:123] Gathering logs for kube-controller-manager [7204ff5e6c12] ...
	I0930 04:10:19.935702    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7204ff5e6c12"
	I0930 04:10:19.948712    5073 logs.go:123] Gathering logs for storage-provisioner [e90f49b06c7e] ...
	I0930 04:10:19.948724    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e90f49b06c7e"
	I0930 04:10:19.960161    5073 logs.go:123] Gathering logs for container status ...
	I0930 04:10:19.960169    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 04:10:19.973329    5073 logs.go:123] Gathering logs for kube-apiserver [5c05fceb7aa1] ...
	I0930 04:10:19.973338    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c05fceb7aa1"
	I0930 04:10:19.998798    5073 logs.go:123] Gathering logs for etcd [39ed46927cbf] ...
	I0930 04:10:19.998808    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39ed46927cbf"
	I0930 04:10:20.012925    5073 logs.go:123] Gathering logs for coredns [857ff5a64aef] ...
	I0930 04:10:20.012933    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 857ff5a64aef"
	I0930 04:10:20.024602    5073 logs.go:123] Gathering logs for kube-scheduler [82cb48f54510] ...
	I0930 04:10:20.024614    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82cb48f54510"
	I0930 04:10:20.041171    5073 logs.go:123] Gathering logs for kube-proxy [341e764f9485] ...
	I0930 04:10:20.041180    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 341e764f9485"
	I0930 04:10:20.052911    5073 logs.go:123] Gathering logs for kube-apiserver [ed13ab559759] ...
	I0930 04:10:20.052922    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed13ab559759"
	I0930 04:10:20.067210    5073 logs.go:123] Gathering logs for kube-scheduler [f87365fcd967] ...
	I0930 04:10:20.067221    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f87365fcd967"
	I0930 04:10:20.079015    5073 logs.go:123] Gathering logs for kube-controller-manager [e0d1000bec45] ...
	I0930 04:10:20.079025    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0d1000bec45"
	I0930 04:10:20.095757    5073 logs.go:123] Gathering logs for Docker ...
	I0930 04:10:20.095768    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0930 04:10:18.278292    4929 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:10:18.278426    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0930 04:10:18.291884    4929 logs.go:276] 1 containers: [c3591c3891b2]
	I0930 04:10:18.291979    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0930 04:10:18.303440    4929 logs.go:276] 1 containers: [c9f50b35a283]
	I0930 04:10:18.303520    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0930 04:10:18.313882    4929 logs.go:276] 2 containers: [85a0a5385195 4b60eaea6a29]
	I0930 04:10:18.313964    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0930 04:10:18.326120    4929 logs.go:276] 1 containers: [7dc64314198d]
	I0930 04:10:18.326210    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0930 04:10:18.336965    4929 logs.go:276] 1 containers: [485edac7e4e9]
	I0930 04:10:18.337058    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0930 04:10:18.347782    4929 logs.go:276] 1 containers: [55b4e3fee39e]
	I0930 04:10:18.347867    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0930 04:10:18.358144    4929 logs.go:276] 0 containers: []
	W0930 04:10:18.358155    4929 logs.go:278] No container was found matching "kindnet"
	I0930 04:10:18.358225    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0930 04:10:18.369066    4929 logs.go:276] 1 containers: [918371f0f495]
	I0930 04:10:18.369083    4929 logs.go:123] Gathering logs for dmesg ...
	I0930 04:10:18.369089    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 04:10:18.373990    4929 logs.go:123] Gathering logs for coredns [85a0a5385195] ...
	I0930 04:10:18.374002    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85a0a5385195"
	I0930 04:10:18.389915    4929 logs.go:123] Gathering logs for kube-scheduler [7dc64314198d] ...
	I0930 04:10:18.389931    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7dc64314198d"
	I0930 04:10:18.410400    4929 logs.go:123] Gathering logs for kube-proxy [485edac7e4e9] ...
	I0930 04:10:18.410414    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 485edac7e4e9"
	I0930 04:10:18.422088    4929 logs.go:123] Gathering logs for storage-provisioner [918371f0f495] ...
	I0930 04:10:18.422101    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 918371f0f495"
	I0930 04:10:18.433308    4929 logs.go:123] Gathering logs for kubelet ...
	I0930 04:10:18.433318    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 04:10:18.466462    4929 logs.go:123] Gathering logs for kube-apiserver [c3591c3891b2] ...
	I0930 04:10:18.466470    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3591c3891b2"
	I0930 04:10:18.489109    4929 logs.go:123] Gathering logs for etcd [c9f50b35a283] ...
	I0930 04:10:18.489118    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9f50b35a283"
	I0930 04:10:18.507109    4929 logs.go:123] Gathering logs for coredns [4b60eaea6a29] ...
	I0930 04:10:18.507120    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b60eaea6a29"
	I0930 04:10:18.518553    4929 logs.go:123] Gathering logs for kube-controller-manager [55b4e3fee39e] ...
	I0930 04:10:18.518565    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 55b4e3fee39e"
	I0930 04:10:18.536105    4929 logs.go:123] Gathering logs for Docker ...
	I0930 04:10:18.536114    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0930 04:10:18.559646    4929 logs.go:123] Gathering logs for container status ...
	I0930 04:10:18.559659    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 04:10:18.571300    4929 logs.go:123] Gathering logs for describe nodes ...
	I0930 04:10:18.571313    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0930 04:10:21.108740    4929 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:10:22.620379    5073 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:10:26.110754    4929 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:10:26.111254    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0930 04:10:26.154900    4929 logs.go:276] 1 containers: [c3591c3891b2]
	I0930 04:10:26.155068    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0930 04:10:26.174839    4929 logs.go:276] 1 containers: [c9f50b35a283]
	I0930 04:10:26.174954    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0930 04:10:26.190112    4929 logs.go:276] 2 containers: [85a0a5385195 4b60eaea6a29]
	I0930 04:10:26.190211    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0930 04:10:26.203704    4929 logs.go:276] 1 containers: [7dc64314198d]
	I0930 04:10:26.203784    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0930 04:10:26.219117    4929 logs.go:276] 1 containers: [485edac7e4e9]
	I0930 04:10:26.219211    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0930 04:10:26.230273    4929 logs.go:276] 1 containers: [55b4e3fee39e]
	I0930 04:10:26.230353    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0930 04:10:26.241181    4929 logs.go:276] 0 containers: []
	W0930 04:10:26.241193    4929 logs.go:278] No container was found matching "kindnet"
	I0930 04:10:26.241267    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0930 04:10:26.251665    4929 logs.go:276] 1 containers: [918371f0f495]
	I0930 04:10:26.251679    4929 logs.go:123] Gathering logs for kube-proxy [485edac7e4e9] ...
	I0930 04:10:26.251684    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 485edac7e4e9"
	I0930 04:10:26.264253    4929 logs.go:123] Gathering logs for kube-controller-manager [55b4e3fee39e] ...
	I0930 04:10:26.264264    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 55b4e3fee39e"
	I0930 04:10:26.282417    4929 logs.go:123] Gathering logs for container status ...
	I0930 04:10:26.282434    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 04:10:26.294102    4929 logs.go:123] Gathering logs for dmesg ...
	I0930 04:10:26.294117    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 04:10:26.298362    4929 logs.go:123] Gathering logs for describe nodes ...
	I0930 04:10:26.298369    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0930 04:10:26.333338    4929 logs.go:123] Gathering logs for etcd [c9f50b35a283] ...
	I0930 04:10:26.333353    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9f50b35a283"
	I0930 04:10:26.347631    4929 logs.go:123] Gathering logs for coredns [4b60eaea6a29] ...
	I0930 04:10:26.347644    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b60eaea6a29"
	I0930 04:10:26.359281    4929 logs.go:123] Gathering logs for kube-scheduler [7dc64314198d] ...
	I0930 04:10:26.359297    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7dc64314198d"
	I0930 04:10:26.374513    4929 logs.go:123] Gathering logs for storage-provisioner [918371f0f495] ...
	I0930 04:10:26.374523    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 918371f0f495"
	I0930 04:10:26.385954    4929 logs.go:123] Gathering logs for Docker ...
	I0930 04:10:26.385968    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0930 04:10:26.410506    4929 logs.go:123] Gathering logs for kubelet ...
	I0930 04:10:26.410515    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 04:10:26.445471    4929 logs.go:123] Gathering logs for kube-apiserver [c3591c3891b2] ...
	I0930 04:10:26.445479    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3591c3891b2"
	I0930 04:10:26.460023    4929 logs.go:123] Gathering logs for coredns [85a0a5385195] ...
	I0930 04:10:26.460038    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85a0a5385195"
	I0930 04:10:27.622396    5073 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:10:27.622726    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0930 04:10:27.648166    5073 logs.go:276] 2 containers: [ed13ab559759 5c05fceb7aa1]
	I0930 04:10:27.648315    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0930 04:10:27.665908    5073 logs.go:276] 2 containers: [39ed46927cbf 9a2747d15d5c]
	I0930 04:10:27.666010    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0930 04:10:27.683646    5073 logs.go:276] 1 containers: [857ff5a64aef]
	I0930 04:10:27.683736    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0930 04:10:27.694607    5073 logs.go:276] 2 containers: [f87365fcd967 82cb48f54510]
	I0930 04:10:27.694701    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0930 04:10:27.705006    5073 logs.go:276] 1 containers: [341e764f9485]
	I0930 04:10:27.705092    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0930 04:10:27.715423    5073 logs.go:276] 2 containers: [e0d1000bec45 7204ff5e6c12]
	I0930 04:10:27.715521    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0930 04:10:27.725591    5073 logs.go:276] 0 containers: []
	W0930 04:10:27.725602    5073 logs.go:278] No container was found matching "kindnet"
	I0930 04:10:27.725668    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0930 04:10:27.736566    5073 logs.go:276] 1 containers: [e90f49b06c7e]
	I0930 04:10:27.736585    5073 logs.go:123] Gathering logs for kube-apiserver [ed13ab559759] ...
	I0930 04:10:27.736591    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed13ab559759"
	I0930 04:10:27.750081    5073 logs.go:123] Gathering logs for kube-scheduler [82cb48f54510] ...
	I0930 04:10:27.750096    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82cb48f54510"
	I0930 04:10:27.766767    5073 logs.go:123] Gathering logs for Docker ...
	I0930 04:10:27.766778    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0930 04:10:27.790096    5073 logs.go:123] Gathering logs for container status ...
	I0930 04:10:27.790104    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 04:10:27.802388    5073 logs.go:123] Gathering logs for kubelet ...
	I0930 04:10:27.802398    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 04:10:27.841159    5073 logs.go:123] Gathering logs for describe nodes ...
	I0930 04:10:27.841167    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0930 04:10:27.876126    5073 logs.go:123] Gathering logs for kube-apiserver [5c05fceb7aa1] ...
	I0930 04:10:27.876141    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c05fceb7aa1"
	I0930 04:10:27.901649    5073 logs.go:123] Gathering logs for etcd [9a2747d15d5c] ...
	I0930 04:10:27.901660    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a2747d15d5c"
	I0930 04:10:27.916284    5073 logs.go:123] Gathering logs for coredns [857ff5a64aef] ...
	I0930 04:10:27.916295    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 857ff5a64aef"
	I0930 04:10:27.927287    5073 logs.go:123] Gathering logs for dmesg ...
	I0930 04:10:27.927297    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 04:10:27.931533    5073 logs.go:123] Gathering logs for kube-controller-manager [e0d1000bec45] ...
	I0930 04:10:27.931542    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0d1000bec45"
	I0930 04:10:27.954698    5073 logs.go:123] Gathering logs for kube-controller-manager [7204ff5e6c12] ...
	I0930 04:10:27.954707    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7204ff5e6c12"
	I0930 04:10:27.973270    5073 logs.go:123] Gathering logs for storage-provisioner [e90f49b06c7e] ...
	I0930 04:10:27.973281    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e90f49b06c7e"
	I0930 04:10:27.984602    5073 logs.go:123] Gathering logs for kube-scheduler [f87365fcd967] ...
	I0930 04:10:27.984611    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f87365fcd967"
	I0930 04:10:27.996543    5073 logs.go:123] Gathering logs for kube-proxy [341e764f9485] ...
	I0930 04:10:27.996558    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 341e764f9485"
	I0930 04:10:28.008166    5073 logs.go:123] Gathering logs for etcd [39ed46927cbf] ...
	I0930 04:10:28.008176    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39ed46927cbf"
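
Each diagnostic pass begins by enumerating the containers for every control-plane component with docker ps -a --filter=name=k8s_<component> --format={{.ID}}, producing the "N containers: [...]" lines above. A standalone sketch of that enumeration, assuming only a Docker CLI on PATH (the component names are the ones the log cycles through):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // containerIDs mirrors one enumeration call:
    //   docker ps -a --filter=name=k8s_<component> --format={{.ID}}
    func containerIDs(component string) ([]string, error) {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+component,
            "--format", "{{.ID}}").Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        for _, c := range []string{"kube-apiserver", "etcd", "coredns",
            "kube-scheduler", "kube-proxy", "kube-controller-manager",
            "kindnet", "storage-provisioner"} {
            ids, err := containerIDs(c)
            if err != nil {
                fmt.Printf("%s: %v\n", c, err)
                continue
            }
            // same shape as the logs.go:276 "N containers: [...]" lines
            fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
        }
    }
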
	I0930 04:10:30.523824    5073 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:10:28.973882    4929 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:10:35.525846    5073 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:10:35.526055    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0930 04:10:35.547854    5073 logs.go:276] 2 containers: [ed13ab559759 5c05fceb7aa1]
	I0930 04:10:35.548013    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0930 04:10:35.564655    5073 logs.go:276] 2 containers: [39ed46927cbf 9a2747d15d5c]
	I0930 04:10:35.564761    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0930 04:10:35.577143    5073 logs.go:276] 1 containers: [857ff5a64aef]
	I0930 04:10:35.577231    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0930 04:10:35.588140    5073 logs.go:276] 2 containers: [f87365fcd967 82cb48f54510]
	I0930 04:10:35.588227    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0930 04:10:35.598941    5073 logs.go:276] 1 containers: [341e764f9485]
	I0930 04:10:35.599018    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0930 04:10:35.609237    5073 logs.go:276] 2 containers: [e0d1000bec45 7204ff5e6c12]
	I0930 04:10:35.609314    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0930 04:10:35.619762    5073 logs.go:276] 0 containers: []
	W0930 04:10:35.619774    5073 logs.go:278] No container was found matching "kindnet"
	I0930 04:10:35.619847    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0930 04:10:35.630231    5073 logs.go:276] 1 containers: [e90f49b06c7e]
	I0930 04:10:35.630247    5073 logs.go:123] Gathering logs for kube-apiserver [ed13ab559759] ...
	I0930 04:10:35.630251    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed13ab559759"
	I0930 04:10:35.648223    5073 logs.go:123] Gathering logs for kube-apiserver [5c05fceb7aa1] ...
	I0930 04:10:35.648236    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c05fceb7aa1"
	I0930 04:10:35.673069    5073 logs.go:123] Gathering logs for etcd [9a2747d15d5c] ...
	I0930 04:10:35.673080    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a2747d15d5c"
	I0930 04:10:33.975913    4929 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:10:33.976261    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0930 04:10:34.004720    4929 logs.go:276] 1 containers: [c3591c3891b2]
	I0930 04:10:34.004874    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0930 04:10:34.022637    4929 logs.go:276] 1 containers: [c9f50b35a283]
	I0930 04:10:34.022737    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0930 04:10:34.042262    4929 logs.go:276] 2 containers: [85a0a5385195 4b60eaea6a29]
	I0930 04:10:34.042349    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0930 04:10:34.053686    4929 logs.go:276] 1 containers: [7dc64314198d]
	I0930 04:10:34.053770    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0930 04:10:34.064378    4929 logs.go:276] 1 containers: [485edac7e4e9]
	I0930 04:10:34.064460    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0930 04:10:34.074435    4929 logs.go:276] 1 containers: [55b4e3fee39e]
	I0930 04:10:34.074512    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0930 04:10:34.083890    4929 logs.go:276] 0 containers: []
	W0930 04:10:34.083903    4929 logs.go:278] No container was found matching "kindnet"
	I0930 04:10:34.083971    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0930 04:10:34.098792    4929 logs.go:276] 1 containers: [918371f0f495]
	I0930 04:10:34.098809    4929 logs.go:123] Gathering logs for kube-scheduler [7dc64314198d] ...
	I0930 04:10:34.098814    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7dc64314198d"
	I0930 04:10:34.113725    4929 logs.go:123] Gathering logs for kube-proxy [485edac7e4e9] ...
	I0930 04:10:34.113735    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 485edac7e4e9"
	I0930 04:10:34.128281    4929 logs.go:123] Gathering logs for kube-controller-manager [55b4e3fee39e] ...
	I0930 04:10:34.128293    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 55b4e3fee39e"
	I0930 04:10:34.160523    4929 logs.go:123] Gathering logs for kubelet ...
	I0930 04:10:34.160534    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 04:10:34.195435    4929 logs.go:123] Gathering logs for dmesg ...
	I0930 04:10:34.195442    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 04:10:34.200321    4929 logs.go:123] Gathering logs for kube-apiserver [c3591c3891b2] ...
	I0930 04:10:34.200327    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3591c3891b2"
	I0930 04:10:34.214072    4929 logs.go:123] Gathering logs for etcd [c9f50b35a283] ...
	I0930 04:10:34.214081    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9f50b35a283"
	I0930 04:10:34.253621    4929 logs.go:123] Gathering logs for coredns [4b60eaea6a29] ...
	I0930 04:10:34.253632    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b60eaea6a29"
	I0930 04:10:34.266227    4929 logs.go:123] Gathering logs for storage-provisioner [918371f0f495] ...
	I0930 04:10:34.266237    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 918371f0f495"
	I0930 04:10:34.282128    4929 logs.go:123] Gathering logs for container status ...
	I0930 04:10:34.282142    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 04:10:34.294387    4929 logs.go:123] Gathering logs for describe nodes ...
	I0930 04:10:34.294401    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0930 04:10:34.329783    4929 logs.go:123] Gathering logs for coredns [85a0a5385195] ...
	I0930 04:10:34.329794    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85a0a5385195"
	I0930 04:10:34.342152    4929 logs.go:123] Gathering logs for Docker ...
	I0930 04:10:34.342168    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0930 04:10:36.869093    4929 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:10:35.691282    5073 logs.go:123] Gathering logs for coredns [857ff5a64aef] ...
	I0930 04:10:35.691293    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 857ff5a64aef"
	I0930 04:10:35.703052    5073 logs.go:123] Gathering logs for kube-scheduler [f87365fcd967] ...
	I0930 04:10:35.703062    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f87365fcd967"
	I0930 04:10:35.714769    5073 logs.go:123] Gathering logs for kube-proxy [341e764f9485] ...
	I0930 04:10:35.714779    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 341e764f9485"
	I0930 04:10:35.726820    5073 logs.go:123] Gathering logs for dmesg ...
	I0930 04:10:35.726831    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 04:10:35.731216    5073 logs.go:123] Gathering logs for kube-controller-manager [7204ff5e6c12] ...
	I0930 04:10:35.731227    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7204ff5e6c12"
	I0930 04:10:35.743965    5073 logs.go:123] Gathering logs for storage-provisioner [e90f49b06c7e] ...
	I0930 04:10:35.743982    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e90f49b06c7e"
	I0930 04:10:35.755225    5073 logs.go:123] Gathering logs for container status ...
	I0930 04:10:35.755235    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 04:10:35.767253    5073 logs.go:123] Gathering logs for describe nodes ...
	I0930 04:10:35.767264    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0930 04:10:35.802712    5073 logs.go:123] Gathering logs for etcd [39ed46927cbf] ...
	I0930 04:10:35.802726    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39ed46927cbf"
	I0930 04:10:35.817265    5073 logs.go:123] Gathering logs for Docker ...
	I0930 04:10:35.817275    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0930 04:10:35.842156    5073 logs.go:123] Gathering logs for kubelet ...
	I0930 04:10:35.842164    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 04:10:35.881101    5073 logs.go:123] Gathering logs for kube-scheduler [82cb48f54510] ...
	I0930 04:10:35.881114    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82cb48f54510"
	I0930 04:10:35.904402    5073 logs.go:123] Gathering logs for kube-controller-manager [e0d1000bec45] ...
	I0930 04:10:35.904412    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0d1000bec45"
	I0930 04:10:38.423628    5073 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:10:41.871177    4929 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:10:41.871316    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0930 04:10:41.882678    4929 logs.go:276] 1 containers: [c3591c3891b2]
	I0930 04:10:41.882775    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0930 04:10:41.893366    4929 logs.go:276] 1 containers: [c9f50b35a283]
	I0930 04:10:41.893449    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0930 04:10:41.903556    4929 logs.go:276] 2 containers: [85a0a5385195 4b60eaea6a29]
	I0930 04:10:41.903638    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0930 04:10:41.914184    4929 logs.go:276] 1 containers: [7dc64314198d]
	I0930 04:10:41.914262    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0930 04:10:41.924896    4929 logs.go:276] 1 containers: [485edac7e4e9]
	I0930 04:10:41.924974    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0930 04:10:41.935464    4929 logs.go:276] 1 containers: [55b4e3fee39e]
	I0930 04:10:41.935542    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0930 04:10:41.947840    4929 logs.go:276] 0 containers: []
	W0930 04:10:41.947852    4929 logs.go:278] No container was found matching "kindnet"
	I0930 04:10:41.947924    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0930 04:10:41.965291    4929 logs.go:276] 1 containers: [918371f0f495]
	I0930 04:10:41.965305    4929 logs.go:123] Gathering logs for kube-proxy [485edac7e4e9] ...
	I0930 04:10:41.965310    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 485edac7e4e9"
	I0930 04:10:41.981022    4929 logs.go:123] Gathering logs for kube-controller-manager [55b4e3fee39e] ...
	I0930 04:10:41.981032    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 55b4e3fee39e"
	I0930 04:10:42.002950    4929 logs.go:123] Gathering logs for Docker ...
	I0930 04:10:42.002960    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0930 04:10:42.028348    4929 logs.go:123] Gathering logs for kubelet ...
	I0930 04:10:42.028355    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 04:10:42.061089    4929 logs.go:123] Gathering logs for describe nodes ...
	I0930 04:10:42.061096    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0930 04:10:42.097920    4929 logs.go:123] Gathering logs for coredns [4b60eaea6a29] ...
	I0930 04:10:42.097929    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b60eaea6a29"
	I0930 04:10:42.110146    4929 logs.go:123] Gathering logs for coredns [85a0a5385195] ...
	I0930 04:10:42.110161    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85a0a5385195"
	I0930 04:10:42.122019    4929 logs.go:123] Gathering logs for kube-scheduler [7dc64314198d] ...
	I0930 04:10:42.122029    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7dc64314198d"
	I0930 04:10:42.140443    4929 logs.go:123] Gathering logs for storage-provisioner [918371f0f495] ...
	I0930 04:10:42.140454    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 918371f0f495"
	I0930 04:10:42.153394    4929 logs.go:123] Gathering logs for container status ...
	I0930 04:10:42.153405    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 04:10:42.165073    4929 logs.go:123] Gathering logs for dmesg ...
	I0930 04:10:42.165084    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 04:10:42.169491    4929 logs.go:123] Gathering logs for kube-apiserver [c3591c3891b2] ...
	I0930 04:10:42.169499    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3591c3891b2"
	I0930 04:10:42.183692    4929 logs.go:123] Gathering logs for etcd [c9f50b35a283] ...
	I0930 04:10:42.183703    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9f50b35a283"
	I0930 04:10:43.425856    5073 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:10:43.426042    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0930 04:10:43.441076    5073 logs.go:276] 2 containers: [ed13ab559759 5c05fceb7aa1]
	I0930 04:10:43.441173    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0930 04:10:43.453998    5073 logs.go:276] 2 containers: [39ed46927cbf 9a2747d15d5c]
	I0930 04:10:43.454087    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0930 04:10:43.464751    5073 logs.go:276] 1 containers: [857ff5a64aef]
	I0930 04:10:43.464824    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0930 04:10:43.476817    5073 logs.go:276] 2 containers: [f87365fcd967 82cb48f54510]
	I0930 04:10:43.476897    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0930 04:10:43.487118    5073 logs.go:276] 1 containers: [341e764f9485]
	I0930 04:10:43.487216    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0930 04:10:43.508004    5073 logs.go:276] 2 containers: [e0d1000bec45 7204ff5e6c12]
	I0930 04:10:43.508087    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0930 04:10:43.517876    5073 logs.go:276] 0 containers: []
	W0930 04:10:43.517891    5073 logs.go:278] No container was found matching "kindnet"
	I0930 04:10:43.517959    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0930 04:10:43.528523    5073 logs.go:276] 1 containers: [e90f49b06c7e]
	I0930 04:10:43.528540    5073 logs.go:123] Gathering logs for etcd [9a2747d15d5c] ...
	I0930 04:10:43.528546    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a2747d15d5c"
	I0930 04:10:43.543101    5073 logs.go:123] Gathering logs for kubelet ...
	I0930 04:10:43.543114    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 04:10:43.582223    5073 logs.go:123] Gathering logs for describe nodes ...
	I0930 04:10:43.582232    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0930 04:10:43.617287    5073 logs.go:123] Gathering logs for kube-apiserver [5c05fceb7aa1] ...
	I0930 04:10:43.617298    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c05fceb7aa1"
	I0930 04:10:43.642850    5073 logs.go:123] Gathering logs for kube-controller-manager [e0d1000bec45] ...
	I0930 04:10:43.642866    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0d1000bec45"
	I0930 04:10:43.660224    5073 logs.go:123] Gathering logs for kube-controller-manager [7204ff5e6c12] ...
	I0930 04:10:43.660236    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7204ff5e6c12"
	I0930 04:10:43.672437    5073 logs.go:123] Gathering logs for Docker ...
	I0930 04:10:43.672448    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0930 04:10:43.694796    5073 logs.go:123] Gathering logs for dmesg ...
	I0930 04:10:43.694806    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 04:10:43.698905    5073 logs.go:123] Gathering logs for kube-apiserver [ed13ab559759] ...
	I0930 04:10:43.698914    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed13ab559759"
	I0930 04:10:43.712553    5073 logs.go:123] Gathering logs for container status ...
	I0930 04:10:43.712563    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 04:10:43.724186    5073 logs.go:123] Gathering logs for kube-proxy [341e764f9485] ...
	I0930 04:10:43.724198    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 341e764f9485"
	I0930 04:10:43.736819    5073 logs.go:123] Gathering logs for storage-provisioner [e90f49b06c7e] ...
	I0930 04:10:43.736832    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e90f49b06c7e"
	I0930 04:10:43.748860    5073 logs.go:123] Gathering logs for kube-scheduler [f87365fcd967] ...
	I0930 04:10:43.748872    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f87365fcd967"
	I0930 04:10:43.761182    5073 logs.go:123] Gathering logs for kube-scheduler [82cb48f54510] ...
	I0930 04:10:43.761197    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82cb48f54510"
	I0930 04:10:43.777710    5073 logs.go:123] Gathering logs for etcd [39ed46927cbf] ...
	I0930 04:10:43.777724    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39ed46927cbf"
	I0930 04:10:43.792470    5073 logs.go:123] Gathering logs for coredns [857ff5a64aef] ...
	I0930 04:10:43.792484    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 857ff5a64aef"
	I0930 04:10:44.699620    4929 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:10:46.305913    5073 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:10:49.701993    4929 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:10:49.702410    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0930 04:10:49.733790    4929 logs.go:276] 1 containers: [c3591c3891b2]
	I0930 04:10:49.733940    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0930 04:10:49.752890    4929 logs.go:276] 1 containers: [c9f50b35a283]
	I0930 04:10:49.753004    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0930 04:10:49.766610    4929 logs.go:276] 2 containers: [85a0a5385195 4b60eaea6a29]
	I0930 04:10:49.766712    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0930 04:10:49.778746    4929 logs.go:276] 1 containers: [7dc64314198d]
	I0930 04:10:49.778834    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0930 04:10:49.789992    4929 logs.go:276] 1 containers: [485edac7e4e9]
	I0930 04:10:49.790072    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0930 04:10:49.800705    4929 logs.go:276] 1 containers: [55b4e3fee39e]
	I0930 04:10:49.800786    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0930 04:10:49.810922    4929 logs.go:276] 0 containers: []
	W0930 04:10:49.810935    4929 logs.go:278] No container was found matching "kindnet"
	I0930 04:10:49.811007    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0930 04:10:49.824032    4929 logs.go:276] 1 containers: [918371f0f495]
	I0930 04:10:49.824049    4929 logs.go:123] Gathering logs for kubelet ...
	I0930 04:10:49.824055    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 04:10:49.859229    4929 logs.go:123] Gathering logs for dmesg ...
	I0930 04:10:49.859240    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 04:10:49.863943    4929 logs.go:123] Gathering logs for describe nodes ...
	I0930 04:10:49.863951    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0930 04:10:49.899125    4929 logs.go:123] Gathering logs for kube-controller-manager [55b4e3fee39e] ...
	I0930 04:10:49.899137    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 55b4e3fee39e"
	I0930 04:10:49.917548    4929 logs.go:123] Gathering logs for storage-provisioner [918371f0f495] ...
	I0930 04:10:49.917561    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 918371f0f495"
	I0930 04:10:49.934432    4929 logs.go:123] Gathering logs for Docker ...
	I0930 04:10:49.934445    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0930 04:10:49.959249    4929 logs.go:123] Gathering logs for container status ...
	I0930 04:10:49.959257    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 04:10:49.970602    4929 logs.go:123] Gathering logs for kube-apiserver [c3591c3891b2] ...
	I0930 04:10:49.970614    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3591c3891b2"
	I0930 04:10:49.985476    4929 logs.go:123] Gathering logs for etcd [c9f50b35a283] ...
	I0930 04:10:49.985486    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9f50b35a283"
	I0930 04:10:49.999550    4929 logs.go:123] Gathering logs for coredns [85a0a5385195] ...
	I0930 04:10:49.999561    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85a0a5385195"
	I0930 04:10:50.011324    4929 logs.go:123] Gathering logs for coredns [4b60eaea6a29] ...
	I0930 04:10:50.011340    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b60eaea6a29"
	I0930 04:10:50.029960    4929 logs.go:123] Gathering logs for kube-scheduler [7dc64314198d] ...
	I0930 04:10:50.029970    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7dc64314198d"
	I0930 04:10:50.044900    4929 logs.go:123] Gathering logs for kube-proxy [485edac7e4e9] ...
	I0930 04:10:50.044910    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 485edac7e4e9"
	I0930 04:10:52.564973    4929 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:10:51.308154    5073 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:10:51.308410    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0930 04:10:51.328927    5073 logs.go:276] 2 containers: [ed13ab559759 5c05fceb7aa1]
	I0930 04:10:51.329062    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0930 04:10:51.343041    5073 logs.go:276] 2 containers: [39ed46927cbf 9a2747d15d5c]
	I0930 04:10:51.343132    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0930 04:10:51.355696    5073 logs.go:276] 1 containers: [857ff5a64aef]
	I0930 04:10:51.355780    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0930 04:10:51.365941    5073 logs.go:276] 2 containers: [f87365fcd967 82cb48f54510]
	I0930 04:10:51.366027    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0930 04:10:51.380751    5073 logs.go:276] 1 containers: [341e764f9485]
	I0930 04:10:51.380832    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0930 04:10:51.397688    5073 logs.go:276] 2 containers: [e0d1000bec45 7204ff5e6c12]
	I0930 04:10:51.397788    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0930 04:10:51.419361    5073 logs.go:276] 0 containers: []
	W0930 04:10:51.419373    5073 logs.go:278] No container was found matching "kindnet"
	I0930 04:10:51.419442    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0930 04:10:51.429761    5073 logs.go:276] 1 containers: [e90f49b06c7e]
	I0930 04:10:51.429784    5073 logs.go:123] Gathering logs for kube-scheduler [82cb48f54510] ...
	I0930 04:10:51.429790    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82cb48f54510"
	I0930 04:10:51.449106    5073 logs.go:123] Gathering logs for storage-provisioner [e90f49b06c7e] ...
	I0930 04:10:51.449121    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e90f49b06c7e"
	I0930 04:10:51.460574    5073 logs.go:123] Gathering logs for kubelet ...
	I0930 04:10:51.460584    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 04:10:51.497986    5073 logs.go:123] Gathering logs for kube-proxy [341e764f9485] ...
	I0930 04:10:51.497996    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 341e764f9485"
	I0930 04:10:51.514848    5073 logs.go:123] Gathering logs for kube-scheduler [f87365fcd967] ...
	I0930 04:10:51.514860    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f87365fcd967"
	I0930 04:10:51.529260    5073 logs.go:123] Gathering logs for coredns [857ff5a64aef] ...
	I0930 04:10:51.529271    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 857ff5a64aef"
	I0930 04:10:51.540800    5073 logs.go:123] Gathering logs for kube-controller-manager [e0d1000bec45] ...
	I0930 04:10:51.540813    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0d1000bec45"
	I0930 04:10:51.558443    5073 logs.go:123] Gathering logs for kube-controller-manager [7204ff5e6c12] ...
	I0930 04:10:51.558453    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7204ff5e6c12"
	I0930 04:10:51.571180    5073 logs.go:123] Gathering logs for dmesg ...
	I0930 04:10:51.571193    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 04:10:51.575231    5073 logs.go:123] Gathering logs for kube-apiserver [ed13ab559759] ...
	I0930 04:10:51.575237    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed13ab559759"
	I0930 04:10:51.588713    5073 logs.go:123] Gathering logs for kube-apiserver [5c05fceb7aa1] ...
	I0930 04:10:51.588723    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c05fceb7aa1"
	I0930 04:10:51.620006    5073 logs.go:123] Gathering logs for etcd [39ed46927cbf] ...
	I0930 04:10:51.620016    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39ed46927cbf"
	I0930 04:10:51.634301    5073 logs.go:123] Gathering logs for etcd [9a2747d15d5c] ...
	I0930 04:10:51.634316    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a2747d15d5c"
	I0930 04:10:51.649478    5073 logs.go:123] Gathering logs for Docker ...
	I0930 04:10:51.649488    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0930 04:10:51.672278    5073 logs.go:123] Gathering logs for container status ...
	I0930 04:10:51.672287    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 04:10:51.683870    5073 logs.go:123] Gathering logs for describe nodes ...
	I0930 04:10:51.683884    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
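
The recurring "container status" command, sudo `which crictl || echo crictl` ps -a || sudo docker ps -a, prefers crictl when which can resolve it; otherwise the bare name crictl fails to execute and the || branch falls through to plain docker ps -a. The same fallback sketched in Go, assuming sudo and at least one of the two CLIs is installed:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    // run executes a command, echoes its combined output, and returns
    // the error so the caller can decide whether to fall back.
    func run(name string, args ...string) error {
        out, err := exec.Command(name, args...).CombinedOutput()
        os.Stdout.Write(out)
        return err
    }

    func main() {
        // Prefer crictl when PATH lookup succeeds; on any failure fall
        // back to `docker ps -a`, as the shell `||` chain above does.
        if path, err := exec.LookPath("crictl"); err == nil {
            if run("sudo", path, "ps", "-a") == nil {
                return
            }
        }
        if err := run("sudo", "docker", "ps", "-a"); err != nil {
            fmt.Fprintln(os.Stderr, "docker ps fallback failed:", err)
        }
    }
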
	I0930 04:10:54.223094    5073 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:10:57.567126    4929 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:10:57.567343    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0930 04:10:57.582959    4929 logs.go:276] 1 containers: [c3591c3891b2]
	I0930 04:10:57.583065    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0930 04:10:57.595120    4929 logs.go:276] 1 containers: [c9f50b35a283]
	I0930 04:10:57.595236    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0930 04:10:57.606537    4929 logs.go:276] 2 containers: [85a0a5385195 4b60eaea6a29]
	I0930 04:10:57.606618    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0930 04:10:57.617044    4929 logs.go:276] 1 containers: [7dc64314198d]
	I0930 04:10:57.617127    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0930 04:10:57.627673    4929 logs.go:276] 1 containers: [485edac7e4e9]
	I0930 04:10:57.627754    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0930 04:10:57.639142    4929 logs.go:276] 1 containers: [55b4e3fee39e]
	I0930 04:10:57.639229    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0930 04:10:57.649289    4929 logs.go:276] 0 containers: []
	W0930 04:10:57.649301    4929 logs.go:278] No container was found matching "kindnet"
	I0930 04:10:57.649369    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0930 04:10:57.659662    4929 logs.go:276] 1 containers: [918371f0f495]
	I0930 04:10:57.659677    4929 logs.go:123] Gathering logs for etcd [c9f50b35a283] ...
	I0930 04:10:57.659682    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9f50b35a283"
	I0930 04:10:57.674429    4929 logs.go:123] Gathering logs for coredns [4b60eaea6a29] ...
	I0930 04:10:57.674443    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b60eaea6a29"
	I0930 04:10:57.687000    4929 logs.go:123] Gathering logs for kube-proxy [485edac7e4e9] ...
	I0930 04:10:57.687011    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 485edac7e4e9"
	I0930 04:10:57.698864    4929 logs.go:123] Gathering logs for kube-controller-manager [55b4e3fee39e] ...
	I0930 04:10:57.698875    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 55b4e3fee39e"
	I0930 04:10:57.718727    4929 logs.go:123] Gathering logs for storage-provisioner [918371f0f495] ...
	I0930 04:10:57.718738    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 918371f0f495"
	I0930 04:10:57.744741    4929 logs.go:123] Gathering logs for Docker ...
	I0930 04:10:57.744755    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0930 04:10:57.773650    4929 logs.go:123] Gathering logs for container status ...
	I0930 04:10:57.773665    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 04:10:57.808634    4929 logs.go:123] Gathering logs for dmesg ...
	I0930 04:10:57.808646    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 04:10:57.816023    4929 logs.go:123] Gathering logs for describe nodes ...
	I0930 04:10:57.816036    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0930 04:10:57.908421    4929 logs.go:123] Gathering logs for kube-apiserver [c3591c3891b2] ...
	I0930 04:10:57.908436    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3591c3891b2"
	I0930 04:10:57.923141    4929 logs.go:123] Gathering logs for coredns [85a0a5385195] ...
	I0930 04:10:57.923151    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85a0a5385195"
	I0930 04:10:57.934945    4929 logs.go:123] Gathering logs for kube-scheduler [7dc64314198d] ...
	I0930 04:10:57.934957    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7dc64314198d"
	I0930 04:10:57.950231    4929 logs.go:123] Gathering logs for kubelet ...
	I0930 04:10:57.950242    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 04:10:59.225422    5073 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:10:59.225620    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0930 04:10:59.238521    5073 logs.go:276] 2 containers: [ed13ab559759 5c05fceb7aa1]
	I0930 04:10:59.238621    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0930 04:10:59.249236    5073 logs.go:276] 2 containers: [39ed46927cbf 9a2747d15d5c]
	I0930 04:10:59.249316    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0930 04:10:59.260111    5073 logs.go:276] 1 containers: [857ff5a64aef]
	I0930 04:10:59.260191    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0930 04:10:59.270713    5073 logs.go:276] 2 containers: [f87365fcd967 82cb48f54510]
	I0930 04:10:59.270802    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0930 04:10:59.281540    5073 logs.go:276] 1 containers: [341e764f9485]
	I0930 04:10:59.281627    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0930 04:10:59.294489    5073 logs.go:276] 2 containers: [e0d1000bec45 7204ff5e6c12]
	I0930 04:10:59.294576    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0930 04:10:59.304919    5073 logs.go:276] 0 containers: []
	W0930 04:10:59.304935    5073 logs.go:278] No container was found matching "kindnet"
	I0930 04:10:59.305005    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0930 04:10:59.314745    5073 logs.go:276] 1 containers: [e90f49b06c7e]
	I0930 04:10:59.314763    5073 logs.go:123] Gathering logs for etcd [39ed46927cbf] ...
	I0930 04:10:59.314768    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39ed46927cbf"
	I0930 04:10:59.329108    5073 logs.go:123] Gathering logs for container status ...
	I0930 04:10:59.329119    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 04:10:59.341308    5073 logs.go:123] Gathering logs for kubelet ...
	I0930 04:10:59.341321    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 04:10:59.379419    5073 logs.go:123] Gathering logs for etcd [9a2747d15d5c] ...
	I0930 04:10:59.379430    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a2747d15d5c"
	I0930 04:10:59.395170    5073 logs.go:123] Gathering logs for kube-controller-manager [e0d1000bec45] ...
	I0930 04:10:59.395184    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0d1000bec45"
	I0930 04:10:59.414342    5073 logs.go:123] Gathering logs for kube-controller-manager [7204ff5e6c12] ...
	I0930 04:10:59.414368    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7204ff5e6c12"
	I0930 04:10:59.428033    5073 logs.go:123] Gathering logs for Docker ...
	I0930 04:10:59.428044    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0930 04:10:59.452347    5073 logs.go:123] Gathering logs for kube-proxy [341e764f9485] ...
	I0930 04:10:59.452357    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 341e764f9485"
	I0930 04:10:59.464072    5073 logs.go:123] Gathering logs for describe nodes ...
	I0930 04:10:59.464083    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0930 04:10:59.498755    5073 logs.go:123] Gathering logs for kube-apiserver [ed13ab559759] ...
	I0930 04:10:59.498767    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed13ab559759"
	I0930 04:10:59.512518    5073 logs.go:123] Gathering logs for kube-apiserver [5c05fceb7aa1] ...
	I0930 04:10:59.512528    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c05fceb7aa1"
	I0930 04:10:59.538620    5073 logs.go:123] Gathering logs for coredns [857ff5a64aef] ...
	I0930 04:10:59.538632    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 857ff5a64aef"
	I0930 04:10:59.549734    5073 logs.go:123] Gathering logs for kube-scheduler [82cb48f54510] ...
	I0930 04:10:59.549744    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82cb48f54510"
	I0930 04:10:59.566392    5073 logs.go:123] Gathering logs for dmesg ...
	I0930 04:10:59.566409    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 04:10:59.570950    5073 logs.go:123] Gathering logs for kube-scheduler [f87365fcd967] ...
	I0930 04:10:59.570956    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f87365fcd967"
	I0930 04:10:59.582726    5073 logs.go:123] Gathering logs for storage-provisioner [e90f49b06c7e] ...
	I0930 04:10:59.582742    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e90f49b06c7e"
	I0930 04:11:00.486490    4929 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:11:02.096471    5073 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:11:05.488668    4929 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:11:05.488956    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0930 04:11:05.510648    4929 logs.go:276] 1 containers: [c3591c3891b2]
	I0930 04:11:05.510765    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0930 04:11:05.527070    4929 logs.go:276] 1 containers: [c9f50b35a283]
	I0930 04:11:05.527173    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0930 04:11:05.539733    4929 logs.go:276] 4 containers: [86d6de14d3fe 1c58ad832f66 85a0a5385195 4b60eaea6a29]
	I0930 04:11:05.539827    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0930 04:11:05.550695    4929 logs.go:276] 1 containers: [7dc64314198d]
	I0930 04:11:05.550776    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0930 04:11:05.561147    4929 logs.go:276] 1 containers: [485edac7e4e9]
	I0930 04:11:05.561221    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0930 04:11:05.571604    4929 logs.go:276] 1 containers: [55b4e3fee39e]
	I0930 04:11:05.571682    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0930 04:11:05.582159    4929 logs.go:276] 0 containers: []
	W0930 04:11:05.582172    4929 logs.go:278] No container was found matching "kindnet"
	I0930 04:11:05.582236    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0930 04:11:05.593544    4929 logs.go:276] 1 containers: [918371f0f495]
	I0930 04:11:05.593574    4929 logs.go:123] Gathering logs for kube-apiserver [c3591c3891b2] ...
	I0930 04:11:05.593579    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3591c3891b2"
	I0930 04:11:05.607790    4929 logs.go:123] Gathering logs for coredns [86d6de14d3fe] ...
	I0930 04:11:05.607800    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86d6de14d3fe"
	I0930 04:11:05.619048    4929 logs.go:123] Gathering logs for coredns [1c58ad832f66] ...
	I0930 04:11:05.619058    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c58ad832f66"
	I0930 04:11:05.630766    4929 logs.go:123] Gathering logs for Docker ...
	I0930 04:11:05.630777    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0930 04:11:05.656335    4929 logs.go:123] Gathering logs for kubelet ...
	I0930 04:11:05.656346    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 04:11:05.691289    4929 logs.go:123] Gathering logs for storage-provisioner [918371f0f495] ...
	I0930 04:11:05.691297    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 918371f0f495"
	I0930 04:11:05.703444    4929 logs.go:123] Gathering logs for container status ...
	I0930 04:11:05.703457    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 04:11:05.715552    4929 logs.go:123] Gathering logs for describe nodes ...
	I0930 04:11:05.715565    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0930 04:11:05.750344    4929 logs.go:123] Gathering logs for kube-scheduler [7dc64314198d] ...
	I0930 04:11:05.750356    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7dc64314198d"
	I0930 04:11:05.766032    4929 logs.go:123] Gathering logs for kube-proxy [485edac7e4e9] ...
	I0930 04:11:05.766043    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 485edac7e4e9"
	I0930 04:11:05.778875    4929 logs.go:123] Gathering logs for kube-controller-manager [55b4e3fee39e] ...
	I0930 04:11:05.778886    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 55b4e3fee39e"
	I0930 04:11:05.796403    4929 logs.go:123] Gathering logs for etcd [c9f50b35a283] ...
	I0930 04:11:05.796413    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9f50b35a283"
	I0930 04:11:05.810910    4929 logs.go:123] Gathering logs for coredns [85a0a5385195] ...
	I0930 04:11:05.810920    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85a0a5385195"
	I0930 04:11:05.822728    4929 logs.go:123] Gathering logs for coredns [4b60eaea6a29] ...
	I0930 04:11:05.822737    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b60eaea6a29"
	I0930 04:11:05.834494    4929 logs.go:123] Gathering logs for dmesg ...
	I0930 04:11:05.834505    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 04:11:07.098782    5073 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:11:07.099041    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0930 04:11:07.119539    5073 logs.go:276] 2 containers: [ed13ab559759 5c05fceb7aa1]
	I0930 04:11:07.119643    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0930 04:11:07.133731    5073 logs.go:276] 2 containers: [39ed46927cbf 9a2747d15d5c]
	I0930 04:11:07.133831    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0930 04:11:07.167991    5073 logs.go:276] 1 containers: [857ff5a64aef]
	I0930 04:11:07.168074    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0930 04:11:07.178778    5073 logs.go:276] 2 containers: [f87365fcd967 82cb48f54510]
	I0930 04:11:07.178868    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0930 04:11:07.189624    5073 logs.go:276] 1 containers: [341e764f9485]
	I0930 04:11:07.189704    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0930 04:11:07.200646    5073 logs.go:276] 2 containers: [e0d1000bec45 7204ff5e6c12]
	I0930 04:11:07.200733    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0930 04:11:07.210848    5073 logs.go:276] 0 containers: []
	W0930 04:11:07.210860    5073 logs.go:278] No container was found matching "kindnet"
	I0930 04:11:07.210934    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0930 04:11:07.221412    5073 logs.go:276] 1 containers: [e90f49b06c7e]
	I0930 04:11:07.221429    5073 logs.go:123] Gathering logs for Docker ...
	I0930 04:11:07.221438    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0930 04:11:07.244918    5073 logs.go:123] Gathering logs for kubelet ...
	I0930 04:11:07.244926    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 04:11:07.284576    5073 logs.go:123] Gathering logs for kube-apiserver [5c05fceb7aa1] ...
	I0930 04:11:07.284592    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c05fceb7aa1"
	I0930 04:11:07.313736    5073 logs.go:123] Gathering logs for kube-scheduler [f87365fcd967] ...
	I0930 04:11:07.313746    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f87365fcd967"
	I0930 04:11:07.326327    5073 logs.go:123] Gathering logs for kube-scheduler [82cb48f54510] ...
	I0930 04:11:07.326342    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82cb48f54510"
	I0930 04:11:07.342740    5073 logs.go:123] Gathering logs for kube-controller-manager [e0d1000bec45] ...
	I0930 04:11:07.342751    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0d1000bec45"
	I0930 04:11:07.360206    5073 logs.go:123] Gathering logs for storage-provisioner [e90f49b06c7e] ...
	I0930 04:11:07.360217    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e90f49b06c7e"
	I0930 04:11:07.371628    5073 logs.go:123] Gathering logs for kube-apiserver [ed13ab559759] ...
	I0930 04:11:07.371638    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed13ab559759"
	I0930 04:11:07.385790    5073 logs.go:123] Gathering logs for kube-proxy [341e764f9485] ...
	I0930 04:11:07.385800    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 341e764f9485"
	I0930 04:11:07.397397    5073 logs.go:123] Gathering logs for dmesg ...
	I0930 04:11:07.397408    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 04:11:07.401976    5073 logs.go:123] Gathering logs for describe nodes ...
	I0930 04:11:07.401983    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0930 04:11:07.436272    5073 logs.go:123] Gathering logs for kube-controller-manager [7204ff5e6c12] ...
	I0930 04:11:07.436284    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7204ff5e6c12"
	I0930 04:11:07.449270    5073 logs.go:123] Gathering logs for container status ...
	I0930 04:11:07.449284    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 04:11:07.461577    5073 logs.go:123] Gathering logs for etcd [39ed46927cbf] ...
	I0930 04:11:07.461592    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39ed46927cbf"
	I0930 04:11:07.479502    5073 logs.go:123] Gathering logs for etcd [9a2747d15d5c] ...
	I0930 04:11:07.479519    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a2747d15d5c"
	I0930 04:11:07.494354    5073 logs.go:123] Gathering logs for coredns [857ff5a64aef] ...
	I0930 04:11:07.494364    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 857ff5a64aef"
	I0930 04:11:10.007757    5073 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:11:08.341203    4929 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:11:15.009986    5073 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:11:15.010275    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0930 04:11:15.032597    5073 logs.go:276] 2 containers: [ed13ab559759 5c05fceb7aa1]
	I0930 04:11:15.032740    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0930 04:11:15.047878    5073 logs.go:276] 2 containers: [39ed46927cbf 9a2747d15d5c]
	I0930 04:11:15.048002    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0930 04:11:15.060630    5073 logs.go:276] 1 containers: [857ff5a64aef]
	I0930 04:11:15.060718    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0930 04:11:15.071330    5073 logs.go:276] 2 containers: [f87365fcd967 82cb48f54510]
	I0930 04:11:15.071418    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0930 04:11:15.081499    5073 logs.go:276] 1 containers: [341e764f9485]
	I0930 04:11:15.081586    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0930 04:11:15.092116    5073 logs.go:276] 2 containers: [e0d1000bec45 7204ff5e6c12]
	I0930 04:11:15.092185    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0930 04:11:15.106615    5073 logs.go:276] 0 containers: []
	W0930 04:11:15.106628    5073 logs.go:278] No container was found matching "kindnet"
	I0930 04:11:15.106721    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0930 04:11:15.117565    5073 logs.go:276] 1 containers: [e90f49b06c7e]
	I0930 04:11:15.117584    5073 logs.go:123] Gathering logs for etcd [9a2747d15d5c] ...
	I0930 04:11:15.117592    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a2747d15d5c"
	I0930 04:11:15.134150    5073 logs.go:123] Gathering logs for Docker ...
	I0930 04:11:15.134160    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0930 04:11:15.157244    5073 logs.go:123] Gathering logs for dmesg ...
	I0930 04:11:15.157254    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 04:11:15.161464    5073 logs.go:123] Gathering logs for kube-scheduler [f87365fcd967] ...
	I0930 04:11:15.161480    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f87365fcd967"
	I0930 04:11:15.173787    5073 logs.go:123] Gathering logs for kube-controller-manager [7204ff5e6c12] ...
	I0930 04:11:15.173799    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7204ff5e6c12"
	I0930 04:11:15.186755    5073 logs.go:123] Gathering logs for storage-provisioner [e90f49b06c7e] ...
	I0930 04:11:15.186767    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e90f49b06c7e"
	I0930 04:11:15.197972    5073 logs.go:123] Gathering logs for etcd [39ed46927cbf] ...
	I0930 04:11:15.197981    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39ed46927cbf"
	I0930 04:11:15.222110    5073 logs.go:123] Gathering logs for kube-proxy [341e764f9485] ...
	I0930 04:11:15.222124    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 341e764f9485"
	I0930 04:11:15.238808    5073 logs.go:123] Gathering logs for kube-controller-manager [e0d1000bec45] ...
	I0930 04:11:15.238823    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0d1000bec45"
	I0930 04:11:15.256380    5073 logs.go:123] Gathering logs for container status ...
	I0930 04:11:15.256390    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 04:11:15.268318    5073 logs.go:123] Gathering logs for kube-apiserver [5c05fceb7aa1] ...
	I0930 04:11:15.268328    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c05fceb7aa1"
	I0930 04:11:15.299073    5073 logs.go:123] Gathering logs for describe nodes ...
	I0930 04:11:15.299082    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0930 04:11:15.333717    5073 logs.go:123] Gathering logs for kube-apiserver [ed13ab559759] ...
	I0930 04:11:15.333729    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed13ab559759"
	I0930 04:11:15.347745    5073 logs.go:123] Gathering logs for coredns [857ff5a64aef] ...
	I0930 04:11:15.347754    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 857ff5a64aef"
	I0930 04:11:15.359211    5073 logs.go:123] Gathering logs for kube-scheduler [82cb48f54510] ...
	I0930 04:11:15.359222    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82cb48f54510"
	I0930 04:11:15.375494    5073 logs.go:123] Gathering logs for kubelet ...
	I0930 04:11:15.375504    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
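Editor's note: the cycle above repeats throughout this run for both profiles (PIDs 4929 and 5073): minikube probes the apiserver's /healthz endpoint, the request hits its client timeout, and the full per-component log bundle is gathered again. A minimal Go sketch of such a probe loop follows; the function name, timeouts, and TLS handling are assumptions for illustration, not minikube's actual api_server.go code.

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    // pollHealthz probes the apiserver /healthz endpoint until it answers
    // 200 or the overall deadline passes. Exceeding the per-request timeout
    // produces the "Client.Timeout exceeded while awaiting headers" error
    // seen in the log lines above.
    func pollHealthz(url string, deadline time.Duration) error {
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                // The guest apiserver serves a self-signed certificate
                // (an assumption in this sketch), so verification is skipped.
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        stop := time.Now().Add(deadline)
        for time.Now().Before(stop) {
            resp, err := client.Get(url)
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil // control plane is healthy
                }
            }
            time.Sleep(2 * time.Second) // back off before the next probe
        }
        return fmt.Errorf("apiserver never became healthy at %s", url)
    }

    func main() {
        if err := pollHealthz("https://10.0.2.15:8443/healthz", 30*time.Second); err != nil {
            fmt.Println(err)
        }
    }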
	I0930 04:11:13.343439    4929 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:11:13.343718    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0930 04:11:13.364984    4929 logs.go:276] 1 containers: [c3591c3891b2]
	I0930 04:11:13.365145    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0930 04:11:13.380663    4929 logs.go:276] 1 containers: [c9f50b35a283]
	I0930 04:11:13.380756    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0930 04:11:13.392872    4929 logs.go:276] 4 containers: [86d6de14d3fe 1c58ad832f66 85a0a5385195 4b60eaea6a29]
	I0930 04:11:13.392961    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0930 04:11:13.403021    4929 logs.go:276] 1 containers: [7dc64314198d]
	I0930 04:11:13.403093    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0930 04:11:13.415582    4929 logs.go:276] 1 containers: [485edac7e4e9]
	I0930 04:11:13.415661    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0930 04:11:13.425926    4929 logs.go:276] 1 containers: [55b4e3fee39e]
	I0930 04:11:13.426004    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0930 04:11:13.435863    4929 logs.go:276] 0 containers: []
	W0930 04:11:13.435874    4929 logs.go:278] No container was found matching "kindnet"
	I0930 04:11:13.435946    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0930 04:11:13.448374    4929 logs.go:276] 1 containers: [918371f0f495]
	I0930 04:11:13.448392    4929 logs.go:123] Gathering logs for kube-apiserver [c3591c3891b2] ...
	I0930 04:11:13.448398    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3591c3891b2"
	I0930 04:11:13.462409    4929 logs.go:123] Gathering logs for coredns [86d6de14d3fe] ...
	I0930 04:11:13.462420    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86d6de14d3fe"
	I0930 04:11:13.474169    4929 logs.go:123] Gathering logs for Docker ...
	I0930 04:11:13.474179    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0930 04:11:13.498254    4929 logs.go:123] Gathering logs for dmesg ...
	I0930 04:11:13.498265    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 04:11:13.503015    4929 logs.go:123] Gathering logs for coredns [1c58ad832f66] ...
	I0930 04:11:13.503021    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c58ad832f66"
	I0930 04:11:13.515031    4929 logs.go:123] Gathering logs for kubelet ...
	I0930 04:11:13.515041    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 04:11:13.549840    4929 logs.go:123] Gathering logs for describe nodes ...
	I0930 04:11:13.549855    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0930 04:11:13.586136    4929 logs.go:123] Gathering logs for kube-controller-manager [55b4e3fee39e] ...
	I0930 04:11:13.586146    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 55b4e3fee39e"
	I0930 04:11:13.603951    4929 logs.go:123] Gathering logs for storage-provisioner [918371f0f495] ...
	I0930 04:11:13.603962    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 918371f0f495"
	I0930 04:11:13.615800    4929 logs.go:123] Gathering logs for etcd [c9f50b35a283] ...
	I0930 04:11:13.615811    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9f50b35a283"
	I0930 04:11:13.629719    4929 logs.go:123] Gathering logs for coredns [85a0a5385195] ...
	I0930 04:11:13.629731    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85a0a5385195"
	I0930 04:11:13.641902    4929 logs.go:123] Gathering logs for coredns [4b60eaea6a29] ...
	I0930 04:11:13.641913    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b60eaea6a29"
	I0930 04:11:13.654244    4929 logs.go:123] Gathering logs for kube-scheduler [7dc64314198d] ...
	I0930 04:11:13.654257    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7dc64314198d"
	I0930 04:11:13.669508    4929 logs.go:123] Gathering logs for kube-proxy [485edac7e4e9] ...
	I0930 04:11:13.669518    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 485edac7e4e9"
	I0930 04:11:13.680603    4929 logs.go:123] Gathering logs for container status ...
	I0930 04:11:13.680612    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
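Between health probes, every cycle first enumerates each control-plane component's containers by name filter before dumping their logs. A hedged sketch of that shell-out (the helper name is hypothetical; minikube runs the same command through its SSH runner inside the guest rather than locally):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // containerIDs lists all containers, including exited ones, whose name
    // matches the k8s_<component> prefix, mirroring the
    // `docker ps -a --filter=name=k8s_... --format={{.ID}}` calls above.
    func containerIDs(component string) ([]string, error) {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+component,
            "--format", "{{.ID}}").Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil // one short ID per line
    }

    func main() {
        ids, err := containerIDs("kube-apiserver")
        if err != nil {
            fmt.Println(err)
            return
        }
        fmt.Printf("%d containers: %v\n", len(ids), ids)
    }

The "2 containers:" lines for profile 5073 show why the -a flag matters: both the current and a previous, exited instance of each component get captured.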
	I0930 04:11:16.194197    4929 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:11:17.914070    5073 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:11:21.196502    4929 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:11:21.197255    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0930 04:11:21.220819    4929 logs.go:276] 1 containers: [c3591c3891b2]
	I0930 04:11:21.220953    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0930 04:11:21.236763    4929 logs.go:276] 1 containers: [c9f50b35a283]
	I0930 04:11:21.236892    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0930 04:11:21.254531    4929 logs.go:276] 4 containers: [86d6de14d3fe 1c58ad832f66 85a0a5385195 4b60eaea6a29]
	I0930 04:11:21.254616    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0930 04:11:21.265515    4929 logs.go:276] 1 containers: [7dc64314198d]
	I0930 04:11:21.265594    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0930 04:11:21.276083    4929 logs.go:276] 1 containers: [485edac7e4e9]
	I0930 04:11:21.276163    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0930 04:11:21.287535    4929 logs.go:276] 1 containers: [55b4e3fee39e]
	I0930 04:11:21.287623    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0930 04:11:21.298056    4929 logs.go:276] 0 containers: []
	W0930 04:11:21.298068    4929 logs.go:278] No container was found matching "kindnet"
	I0930 04:11:21.298142    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0930 04:11:21.309240    4929 logs.go:276] 1 containers: [918371f0f495]
	I0930 04:11:21.309258    4929 logs.go:123] Gathering logs for kubelet ...
	I0930 04:11:21.309264    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 04:11:21.344333    4929 logs.go:123] Gathering logs for kube-apiserver [c3591c3891b2] ...
	I0930 04:11:21.344342    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3591c3891b2"
	I0930 04:11:21.358959    4929 logs.go:123] Gathering logs for coredns [86d6de14d3fe] ...
	I0930 04:11:21.358969    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86d6de14d3fe"
	I0930 04:11:21.369941    4929 logs.go:123] Gathering logs for storage-provisioner [918371f0f495] ...
	I0930 04:11:21.370098    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 918371f0f495"
	I0930 04:11:21.382346    4929 logs.go:123] Gathering logs for describe nodes ...
	I0930 04:11:21.382360    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0930 04:11:21.424195    4929 logs.go:123] Gathering logs for etcd [c9f50b35a283] ...
	I0930 04:11:21.424210    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9f50b35a283"
	I0930 04:11:21.438226    4929 logs.go:123] Gathering logs for coredns [1c58ad832f66] ...
	I0930 04:11:21.438236    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c58ad832f66"
	I0930 04:11:21.450137    4929 logs.go:123] Gathering logs for kube-proxy [485edac7e4e9] ...
	I0930 04:11:21.450147    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 485edac7e4e9"
	I0930 04:11:21.463411    4929 logs.go:123] Gathering logs for kube-controller-manager [55b4e3fee39e] ...
	I0930 04:11:21.463423    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 55b4e3fee39e"
	I0930 04:11:21.480953    4929 logs.go:123] Gathering logs for dmesg ...
	I0930 04:11:21.480963    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 04:11:21.485648    4929 logs.go:123] Gathering logs for coredns [85a0a5385195] ...
	I0930 04:11:21.485654    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85a0a5385195"
	I0930 04:11:21.500572    4929 logs.go:123] Gathering logs for coredns [4b60eaea6a29] ...
	I0930 04:11:21.500583    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b60eaea6a29"
	I0930 04:11:21.513652    4929 logs.go:123] Gathering logs for container status ...
	I0930 04:11:21.513665    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 04:11:21.525469    4929 logs.go:123] Gathering logs for kube-scheduler [7dc64314198d] ...
	I0930 04:11:21.525483    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7dc64314198d"
	I0930 04:11:21.540776    4929 logs.go:123] Gathering logs for Docker ...
	I0930 04:11:21.540786    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0930 04:11:22.916256    5073 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:11:22.916479    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0930 04:11:22.931027    5073 logs.go:276] 2 containers: [ed13ab559759 5c05fceb7aa1]
	I0930 04:11:22.931113    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0930 04:11:22.942906    5073 logs.go:276] 2 containers: [39ed46927cbf 9a2747d15d5c]
	I0930 04:11:22.942996    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0930 04:11:22.953535    5073 logs.go:276] 1 containers: [857ff5a64aef]
	I0930 04:11:22.953623    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0930 04:11:22.964038    5073 logs.go:276] 2 containers: [f87365fcd967 82cb48f54510]
	I0930 04:11:22.964114    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0930 04:11:22.982442    5073 logs.go:276] 1 containers: [341e764f9485]
	I0930 04:11:22.982513    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0930 04:11:22.995015    5073 logs.go:276] 2 containers: [e0d1000bec45 7204ff5e6c12]
	I0930 04:11:22.995102    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0930 04:11:23.005377    5073 logs.go:276] 0 containers: []
	W0930 04:11:23.005387    5073 logs.go:278] No container was found matching "kindnet"
	I0930 04:11:23.005451    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0930 04:11:23.015724    5073 logs.go:276] 1 containers: [e90f49b06c7e]
	I0930 04:11:23.015740    5073 logs.go:123] Gathering logs for kubelet ...
	I0930 04:11:23.015746    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 04:11:23.054815    5073 logs.go:123] Gathering logs for dmesg ...
	I0930 04:11:23.054822    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 04:11:23.070103    5073 logs.go:123] Gathering logs for kube-apiserver [ed13ab559759] ...
	I0930 04:11:23.070112    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed13ab559759"
	I0930 04:11:23.090106    5073 logs.go:123] Gathering logs for etcd [9a2747d15d5c] ...
	I0930 04:11:23.090116    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a2747d15d5c"
	I0930 04:11:23.104955    5073 logs.go:123] Gathering logs for kube-controller-manager [e0d1000bec45] ...
	I0930 04:11:23.104965    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0d1000bec45"
	I0930 04:11:23.122094    5073 logs.go:123] Gathering logs for container status ...
	I0930 04:11:23.122104    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 04:11:23.134858    5073 logs.go:123] Gathering logs for describe nodes ...
	I0930 04:11:23.134868    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0930 04:11:23.172653    5073 logs.go:123] Gathering logs for coredns [857ff5a64aef] ...
	I0930 04:11:23.172664    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 857ff5a64aef"
	I0930 04:11:23.188303    5073 logs.go:123] Gathering logs for kube-scheduler [82cb48f54510] ...
	I0930 04:11:23.188319    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82cb48f54510"
	I0930 04:11:23.204882    5073 logs.go:123] Gathering logs for storage-provisioner [e90f49b06c7e] ...
	I0930 04:11:23.204893    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e90f49b06c7e"
	I0930 04:11:23.216596    5073 logs.go:123] Gathering logs for kube-apiserver [5c05fceb7aa1] ...
	I0930 04:11:23.216611    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c05fceb7aa1"
	I0930 04:11:23.241218    5073 logs.go:123] Gathering logs for kube-scheduler [f87365fcd967] ...
	I0930 04:11:23.241228    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f87365fcd967"
	I0930 04:11:23.252894    5073 logs.go:123] Gathering logs for kube-controller-manager [7204ff5e6c12] ...
	I0930 04:11:23.252903    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7204ff5e6c12"
	I0930 04:11:23.272323    5073 logs.go:123] Gathering logs for etcd [39ed46927cbf] ...
	I0930 04:11:23.272335    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39ed46927cbf"
	I0930 04:11:23.286415    5073 logs.go:123] Gathering logs for kube-proxy [341e764f9485] ...
	I0930 04:11:23.286425    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 341e764f9485"
	I0930 04:11:23.298415    5073 logs.go:123] Gathering logs for Docker ...
	I0930 04:11:23.298425    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
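Each discovered ID is then dumped with a bounded tail, and the command is wrapped in /bin/bash -c exactly as the Run: lines show. A minimal sketch, assuming a local Docker daemon in place of minikube's SSH runner:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // gatherLogs fetches the last 400 lines for one container, matching the
    // `docker logs --tail 400 <id>` commands above. CombinedOutput is used
    // because docker logs writes container output to both streams.
    func gatherLogs(id string) (string, error) {
        cmd := exec.Command("/bin/bash", "-c", "docker logs --tail 400 "+id)
        out, err := cmd.CombinedOutput()
        return string(out), err
    }

    func main() {
        // "ed13ab559759" is one of the kube-apiserver container IDs above.
        out, err := gatherLogs("ed13ab559759")
        if err != nil {
            fmt.Println("gather failed:", err)
        }
        fmt.Print(out)
    }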
	I0930 04:11:24.066039    4929 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:11:25.824383    5073 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:11:29.068317    4929 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:11:29.068632    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0930 04:11:29.093525    4929 logs.go:276] 1 containers: [c3591c3891b2]
	I0930 04:11:29.093656    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0930 04:11:29.112136    4929 logs.go:276] 1 containers: [c9f50b35a283]
	I0930 04:11:29.112239    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0930 04:11:29.124746    4929 logs.go:276] 4 containers: [86d6de14d3fe 1c58ad832f66 85a0a5385195 4b60eaea6a29]
	I0930 04:11:29.124829    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0930 04:11:29.135681    4929 logs.go:276] 1 containers: [7dc64314198d]
	I0930 04:11:29.135764    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0930 04:11:29.147169    4929 logs.go:276] 1 containers: [485edac7e4e9]
	I0930 04:11:29.147255    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0930 04:11:29.158062    4929 logs.go:276] 1 containers: [55b4e3fee39e]
	I0930 04:11:29.158145    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0930 04:11:29.172925    4929 logs.go:276] 0 containers: []
	W0930 04:11:29.172938    4929 logs.go:278] No container was found matching "kindnet"
	I0930 04:11:29.173012    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0930 04:11:29.183636    4929 logs.go:276] 1 containers: [918371f0f495]
	I0930 04:11:29.183652    4929 logs.go:123] Gathering logs for dmesg ...
	I0930 04:11:29.183657    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 04:11:29.188113    4929 logs.go:123] Gathering logs for storage-provisioner [918371f0f495] ...
	I0930 04:11:29.188121    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 918371f0f495"
	I0930 04:11:29.200089    4929 logs.go:123] Gathering logs for container status ...
	I0930 04:11:29.200098    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 04:11:29.211889    4929 logs.go:123] Gathering logs for kube-apiserver [c3591c3891b2] ...
	I0930 04:11:29.211899    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3591c3891b2"
	I0930 04:11:29.228554    4929 logs.go:123] Gathering logs for etcd [c9f50b35a283] ...
	I0930 04:11:29.228565    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9f50b35a283"
	I0930 04:11:29.242646    4929 logs.go:123] Gathering logs for coredns [85a0a5385195] ...
	I0930 04:11:29.242656    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85a0a5385195"
	I0930 04:11:29.264029    4929 logs.go:123] Gathering logs for coredns [4b60eaea6a29] ...
	I0930 04:11:29.264039    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b60eaea6a29"
	I0930 04:11:29.275470    4929 logs.go:123] Gathering logs for Docker ...
	I0930 04:11:29.275483    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0930 04:11:29.301731    4929 logs.go:123] Gathering logs for kubelet ...
	I0930 04:11:29.301748    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 04:11:29.335382    4929 logs.go:123] Gathering logs for describe nodes ...
	I0930 04:11:29.335391    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0930 04:11:29.372114    4929 logs.go:123] Gathering logs for coredns [86d6de14d3fe] ...
	I0930 04:11:29.372125    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86d6de14d3fe"
	I0930 04:11:29.384465    4929 logs.go:123] Gathering logs for kube-scheduler [7dc64314198d] ...
	I0930 04:11:29.384476    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7dc64314198d"
	I0930 04:11:29.399495    4929 logs.go:123] Gathering logs for kube-proxy [485edac7e4e9] ...
	I0930 04:11:29.399511    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 485edac7e4e9"
	I0930 04:11:29.411727    4929 logs.go:123] Gathering logs for coredns [1c58ad832f66] ...
	I0930 04:11:29.411739    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c58ad832f66"
	I0930 04:11:29.424594    4929 logs.go:123] Gathering logs for kube-controller-manager [55b4e3fee39e] ...
	I0930 04:11:29.424604    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 55b4e3fee39e"
	I0930 04:11:31.944440    4929 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:11:30.826617    5073 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:11:30.826786    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0930 04:11:30.838501    5073 logs.go:276] 2 containers: [ed13ab559759 5c05fceb7aa1]
	I0930 04:11:30.838593    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0930 04:11:30.849737    5073 logs.go:276] 2 containers: [39ed46927cbf 9a2747d15d5c]
	I0930 04:11:30.849827    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0930 04:11:30.865835    5073 logs.go:276] 1 containers: [857ff5a64aef]
	I0930 04:11:30.865926    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0930 04:11:30.876669    5073 logs.go:276] 2 containers: [f87365fcd967 82cb48f54510]
	I0930 04:11:30.876754    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0930 04:11:30.887921    5073 logs.go:276] 1 containers: [341e764f9485]
	I0930 04:11:30.888007    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0930 04:11:30.898393    5073 logs.go:276] 2 containers: [e0d1000bec45 7204ff5e6c12]
	I0930 04:11:30.898479    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0930 04:11:30.908427    5073 logs.go:276] 0 containers: []
	W0930 04:11:30.908438    5073 logs.go:278] No container was found matching "kindnet"
	I0930 04:11:30.908507    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0930 04:11:30.918916    5073 logs.go:276] 1 containers: [e90f49b06c7e]
	I0930 04:11:30.918934    5073 logs.go:123] Gathering logs for dmesg ...
	I0930 04:11:30.918940    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 04:11:30.923445    5073 logs.go:123] Gathering logs for kube-apiserver [5c05fceb7aa1] ...
	I0930 04:11:30.923451    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c05fceb7aa1"
	I0930 04:11:30.948954    5073 logs.go:123] Gathering logs for kube-proxy [341e764f9485] ...
	I0930 04:11:30.948967    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 341e764f9485"
	I0930 04:11:30.960574    5073 logs.go:123] Gathering logs for kube-controller-manager [e0d1000bec45] ...
	I0930 04:11:30.960589    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0d1000bec45"
	I0930 04:11:30.978982    5073 logs.go:123] Gathering logs for kubelet ...
	I0930 04:11:30.978998    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 04:11:31.017912    5073 logs.go:123] Gathering logs for describe nodes ...
	I0930 04:11:31.017930    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0930 04:11:31.052834    5073 logs.go:123] Gathering logs for coredns [857ff5a64aef] ...
	I0930 04:11:31.052850    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 857ff5a64aef"
	I0930 04:11:31.064305    5073 logs.go:123] Gathering logs for kube-scheduler [82cb48f54510] ...
	I0930 04:11:31.064316    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82cb48f54510"
	I0930 04:11:31.081907    5073 logs.go:123] Gathering logs for kube-controller-manager [7204ff5e6c12] ...
	I0930 04:11:31.081917    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7204ff5e6c12"
	I0930 04:11:31.093816    5073 logs.go:123] Gathering logs for Docker ...
	I0930 04:11:31.093824    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0930 04:11:31.115538    5073 logs.go:123] Gathering logs for kube-apiserver [ed13ab559759] ...
	I0930 04:11:31.115545    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed13ab559759"
	I0930 04:11:31.129531    5073 logs.go:123] Gathering logs for etcd [9a2747d15d5c] ...
	I0930 04:11:31.129546    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a2747d15d5c"
	I0930 04:11:31.149771    5073 logs.go:123] Gathering logs for kube-scheduler [f87365fcd967] ...
	I0930 04:11:31.149781    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f87365fcd967"
	I0930 04:11:31.161634    5073 logs.go:123] Gathering logs for container status ...
	I0930 04:11:31.161645    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 04:11:31.173931    5073 logs.go:123] Gathering logs for etcd [39ed46927cbf] ...
	I0930 04:11:31.173940    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39ed46927cbf"
	I0930 04:11:31.188112    5073 logs.go:123] Gathering logs for storage-provisioner [e90f49b06c7e] ...
	I0930 04:11:31.188124    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e90f49b06c7e"
	I0930 04:11:33.701806    5073 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:11:36.946648    4929 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:11:36.946857    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0930 04:11:36.962884    4929 logs.go:276] 1 containers: [c3591c3891b2]
	I0930 04:11:36.962993    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0930 04:11:36.975427    4929 logs.go:276] 1 containers: [c9f50b35a283]
	I0930 04:11:36.975509    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0930 04:11:36.985966    4929 logs.go:276] 4 containers: [86d6de14d3fe 1c58ad832f66 85a0a5385195 4b60eaea6a29]
	I0930 04:11:36.986050    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0930 04:11:36.996400    4929 logs.go:276] 1 containers: [7dc64314198d]
	I0930 04:11:36.996487    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0930 04:11:37.006766    4929 logs.go:276] 1 containers: [485edac7e4e9]
	I0930 04:11:37.006851    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0930 04:11:37.020061    4929 logs.go:276] 1 containers: [55b4e3fee39e]
	I0930 04:11:37.020151    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0930 04:11:37.030349    4929 logs.go:276] 0 containers: []
	W0930 04:11:37.030359    4929 logs.go:278] No container was found matching "kindnet"
	I0930 04:11:37.030424    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0930 04:11:37.040742    4929 logs.go:276] 1 containers: [918371f0f495]
	I0930 04:11:37.040759    4929 logs.go:123] Gathering logs for coredns [1c58ad832f66] ...
	I0930 04:11:37.040764    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c58ad832f66"
	I0930 04:11:37.052193    4929 logs.go:123] Gathering logs for kube-scheduler [7dc64314198d] ...
	I0930 04:11:37.052203    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7dc64314198d"
	I0930 04:11:37.067194    4929 logs.go:123] Gathering logs for describe nodes ...
	I0930 04:11:37.067204    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0930 04:11:37.102201    4929 logs.go:123] Gathering logs for kube-proxy [485edac7e4e9] ...
	I0930 04:11:37.102215    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 485edac7e4e9"
	I0930 04:11:37.113813    4929 logs.go:123] Gathering logs for kube-controller-manager [55b4e3fee39e] ...
	I0930 04:11:37.113822    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 55b4e3fee39e"
	I0930 04:11:37.131544    4929 logs.go:123] Gathering logs for Docker ...
	I0930 04:11:37.131553    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0930 04:11:37.157123    4929 logs.go:123] Gathering logs for kubelet ...
	I0930 04:11:37.157133    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 04:11:37.191771    4929 logs.go:123] Gathering logs for kube-apiserver [c3591c3891b2] ...
	I0930 04:11:37.191779    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3591c3891b2"
	I0930 04:11:37.208948    4929 logs.go:123] Gathering logs for coredns [86d6de14d3fe] ...
	I0930 04:11:37.208958    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86d6de14d3fe"
	I0930 04:11:37.220324    4929 logs.go:123] Gathering logs for coredns [4b60eaea6a29] ...
	I0930 04:11:37.220334    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b60eaea6a29"
	I0930 04:11:37.232170    4929 logs.go:123] Gathering logs for container status ...
	I0930 04:11:37.232184    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 04:11:37.244098    4929 logs.go:123] Gathering logs for dmesg ...
	I0930 04:11:37.244113    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 04:11:37.248477    4929 logs.go:123] Gathering logs for etcd [c9f50b35a283] ...
	I0930 04:11:37.248485    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9f50b35a283"
	I0930 04:11:37.266286    4929 logs.go:123] Gathering logs for coredns [85a0a5385195] ...
	I0930 04:11:37.266299    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85a0a5385195"
	I0930 04:11:37.278723    4929 logs.go:123] Gathering logs for storage-provisioner [918371f0f495] ...
	I0930 04:11:37.278734    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 918371f0f495"
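Besides per-container logs, each cycle collects host-level sources: the kubelet and Docker units via journalctl, kernel warnings via dmesg, and a container status listing that prefers crictl but falls back to plain docker ps. The sketch below assumes a hypothetical map structure, but the commands themselves are copied verbatim from the Run: lines.

    package main

    import (
        "fmt"
        "os/exec"
    )

    // hostLogSources maps each label to the exact shell command the log
    // shows for that source. The container-status entry falls back to
    // `docker ps -a` when crictl is absent or fails.
    var hostLogSources = map[string]string{
        "kubelet":          "sudo journalctl -u kubelet -n 400",
        "Docker":           "sudo journalctl -u docker -u cri-docker -n 400",
        "dmesg":            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
        "container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
    }

    func main() {
        for label, cmd := range hostLogSources {
            fmt.Println("Gathering logs for", label, "...")
            out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
            if err != nil {
                fmt.Println("  failed:", err)
                continue
            }
            fmt.Printf("  %d bytes collected\n", len(out))
        }
    }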
	I0930 04:11:38.704621    5073 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:11:38.705188    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0930 04:11:38.740517    5073 logs.go:276] 2 containers: [ed13ab559759 5c05fceb7aa1]
	I0930 04:11:38.740691    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0930 04:11:38.761834    5073 logs.go:276] 2 containers: [39ed46927cbf 9a2747d15d5c]
	I0930 04:11:38.761944    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0930 04:11:38.777521    5073 logs.go:276] 1 containers: [857ff5a64aef]
	I0930 04:11:38.777614    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0930 04:11:38.790537    5073 logs.go:276] 2 containers: [f87365fcd967 82cb48f54510]
	I0930 04:11:38.790625    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0930 04:11:38.801358    5073 logs.go:276] 1 containers: [341e764f9485]
	I0930 04:11:38.801443    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0930 04:11:38.812344    5073 logs.go:276] 2 containers: [e0d1000bec45 7204ff5e6c12]
	I0930 04:11:38.812432    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0930 04:11:38.823255    5073 logs.go:276] 0 containers: []
	W0930 04:11:38.823268    5073 logs.go:278] No container was found matching "kindnet"
	I0930 04:11:38.823343    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0930 04:11:38.834286    5073 logs.go:276] 1 containers: [e90f49b06c7e]
	I0930 04:11:38.834304    5073 logs.go:123] Gathering logs for kube-proxy [341e764f9485] ...
	I0930 04:11:38.834310    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 341e764f9485"
	I0930 04:11:38.847326    5073 logs.go:123] Gathering logs for dmesg ...
	I0930 04:11:38.847336    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 04:11:38.853279    5073 logs.go:123] Gathering logs for describe nodes ...
	I0930 04:11:38.853288    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0930 04:11:38.892557    5073 logs.go:123] Gathering logs for kube-apiserver [ed13ab559759] ...
	I0930 04:11:38.892568    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed13ab559759"
	I0930 04:11:38.907079    5073 logs.go:123] Gathering logs for etcd [9a2747d15d5c] ...
	I0930 04:11:38.907088    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a2747d15d5c"
	I0930 04:11:38.922089    5073 logs.go:123] Gathering logs for coredns [857ff5a64aef] ...
	I0930 04:11:38.922100    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 857ff5a64aef"
	I0930 04:11:38.933476    5073 logs.go:123] Gathering logs for kubelet ...
	I0930 04:11:38.933488    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 04:11:38.971984    5073 logs.go:123] Gathering logs for kube-apiserver [5c05fceb7aa1] ...
	I0930 04:11:38.971991    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c05fceb7aa1"
	I0930 04:11:38.997216    5073 logs.go:123] Gathering logs for kube-controller-manager [7204ff5e6c12] ...
	I0930 04:11:38.997227    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7204ff5e6c12"
	I0930 04:11:39.009631    5073 logs.go:123] Gathering logs for storage-provisioner [e90f49b06c7e] ...
	I0930 04:11:39.009642    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e90f49b06c7e"
	I0930 04:11:39.021595    5073 logs.go:123] Gathering logs for Docker ...
	I0930 04:11:39.021606    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0930 04:11:39.044473    5073 logs.go:123] Gathering logs for kube-scheduler [f87365fcd967] ...
	I0930 04:11:39.044481    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f87365fcd967"
	I0930 04:11:39.056293    5073 logs.go:123] Gathering logs for etcd [39ed46927cbf] ...
	I0930 04:11:39.056309    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39ed46927cbf"
	I0930 04:11:39.070235    5073 logs.go:123] Gathering logs for kube-scheduler [82cb48f54510] ...
	I0930 04:11:39.070245    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82cb48f54510"
	I0930 04:11:39.088684    5073 logs.go:123] Gathering logs for kube-controller-manager [e0d1000bec45] ...
	I0930 04:11:39.088695    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0d1000bec45"
	I0930 04:11:39.108182    5073 logs.go:123] Gathering logs for container status ...
	I0930 04:11:39.108196    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 04:11:39.792355    4929 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:11:41.622628    5073 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:11:44.794482    4929 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:11:44.794657    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0930 04:11:44.806658    4929 logs.go:276] 1 containers: [c3591c3891b2]
	I0930 04:11:44.806744    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0930 04:11:44.817738    4929 logs.go:276] 1 containers: [c9f50b35a283]
	I0930 04:11:44.817827    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0930 04:11:44.828435    4929 logs.go:276] 4 containers: [86d6de14d3fe 1c58ad832f66 85a0a5385195 4b60eaea6a29]
	I0930 04:11:44.828519    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0930 04:11:44.839435    4929 logs.go:276] 1 containers: [7dc64314198d]
	I0930 04:11:44.839516    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0930 04:11:44.850074    4929 logs.go:276] 1 containers: [485edac7e4e9]
	I0930 04:11:44.850156    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0930 04:11:44.860944    4929 logs.go:276] 1 containers: [55b4e3fee39e]
	I0930 04:11:44.861018    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0930 04:11:44.871087    4929 logs.go:276] 0 containers: []
	W0930 04:11:44.871098    4929 logs.go:278] No container was found matching "kindnet"
	I0930 04:11:44.871172    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0930 04:11:44.882306    4929 logs.go:276] 1 containers: [918371f0f495]
	I0930 04:11:44.882322    4929 logs.go:123] Gathering logs for coredns [86d6de14d3fe] ...
	I0930 04:11:44.882327    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86d6de14d3fe"
	I0930 04:11:44.898807    4929 logs.go:123] Gathering logs for coredns [85a0a5385195] ...
	I0930 04:11:44.898822    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85a0a5385195"
	I0930 04:11:44.910655    4929 logs.go:123] Gathering logs for coredns [4b60eaea6a29] ...
	I0930 04:11:44.910667    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b60eaea6a29"
	I0930 04:11:44.922514    4929 logs.go:123] Gathering logs for dmesg ...
	I0930 04:11:44.922524    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 04:11:44.927149    4929 logs.go:123] Gathering logs for coredns [1c58ad832f66] ...
	I0930 04:11:44.927155    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c58ad832f66"
	I0930 04:11:44.938809    4929 logs.go:123] Gathering logs for describe nodes ...
	I0930 04:11:44.938821    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0930 04:11:44.973834    4929 logs.go:123] Gathering logs for kube-apiserver [c3591c3891b2] ...
	I0930 04:11:44.973850    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3591c3891b2"
	I0930 04:11:44.989443    4929 logs.go:123] Gathering logs for kube-proxy [485edac7e4e9] ...
	I0930 04:11:44.989458    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 485edac7e4e9"
	I0930 04:11:45.005051    4929 logs.go:123] Gathering logs for Docker ...
	I0930 04:11:45.005064    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0930 04:11:45.030207    4929 logs.go:123] Gathering logs for storage-provisioner [918371f0f495] ...
	I0930 04:11:45.030215    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 918371f0f495"
	I0930 04:11:45.041389    4929 logs.go:123] Gathering logs for container status ...
	I0930 04:11:45.041399    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 04:11:45.053820    4929 logs.go:123] Gathering logs for kubelet ...
	I0930 04:11:45.053832    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 04:11:45.087371    4929 logs.go:123] Gathering logs for etcd [c9f50b35a283] ...
	I0930 04:11:45.087379    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9f50b35a283"
	I0930 04:11:45.101045    4929 logs.go:123] Gathering logs for kube-scheduler [7dc64314198d] ...
	I0930 04:11:45.101059    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7dc64314198d"
	I0930 04:11:45.115833    4929 logs.go:123] Gathering logs for kube-controller-manager [55b4e3fee39e] ...
	I0930 04:11:45.115844    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 55b4e3fee39e"
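The "describe nodes" step differs from the others in that it calls the kubectl binary minikube installs inside the guest, pinned to the cluster version (v1.24.1 here) and pointed at an explicit kubeconfig. A sketch with the paths copied from the log and error handling simplified:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // describeNodes shells out to the guest-local kubectl under
    // /var/lib/minikube/binaries/<version>, so the client version always
    // matches the cluster being debugged.
    func describeNodes() (string, error) {
        out, err := exec.Command("sudo",
            "/var/lib/minikube/binaries/v1.24.1/kubectl",
            "describe", "nodes",
            "--kubeconfig=/var/lib/minikube/kubeconfig").CombinedOutput()
        return string(out), err
    }

    func main() {
        out, err := describeNodes()
        if err != nil {
            fmt.Println("describe nodes failed:", err)
        }
        fmt.Print(out)
    }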
	I0930 04:11:47.634724    4929 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:11:46.625354    5073 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:11:46.625652    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0930 04:11:46.648940    5073 logs.go:276] 2 containers: [ed13ab559759 5c05fceb7aa1]
	I0930 04:11:46.649063    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0930 04:11:46.665028    5073 logs.go:276] 2 containers: [39ed46927cbf 9a2747d15d5c]
	I0930 04:11:46.665125    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0930 04:11:46.678195    5073 logs.go:276] 1 containers: [857ff5a64aef]
	I0930 04:11:46.678276    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0930 04:11:46.689704    5073 logs.go:276] 2 containers: [f87365fcd967 82cb48f54510]
	I0930 04:11:46.689789    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0930 04:11:46.699814    5073 logs.go:276] 1 containers: [341e764f9485]
	I0930 04:11:46.699897    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0930 04:11:46.710469    5073 logs.go:276] 2 containers: [e0d1000bec45 7204ff5e6c12]
	I0930 04:11:46.710543    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0930 04:11:46.721137    5073 logs.go:276] 0 containers: []
	W0930 04:11:46.721151    5073 logs.go:278] No container was found matching "kindnet"
	I0930 04:11:46.721220    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0930 04:11:46.732074    5073 logs.go:276] 1 containers: [e90f49b06c7e]
	I0930 04:11:46.732090    5073 logs.go:123] Gathering logs for etcd [39ed46927cbf] ...
	I0930 04:11:46.732095    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39ed46927cbf"
	I0930 04:11:46.745646    5073 logs.go:123] Gathering logs for etcd [9a2747d15d5c] ...
	I0930 04:11:46.745661    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a2747d15d5c"
	I0930 04:11:46.759809    5073 logs.go:123] Gathering logs for kube-scheduler [f87365fcd967] ...
	I0930 04:11:46.759825    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f87365fcd967"
	I0930 04:11:46.772452    5073 logs.go:123] Gathering logs for container status ...
	I0930 04:11:46.772466    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 04:11:46.786643    5073 logs.go:123] Gathering logs for Docker ...
	I0930 04:11:46.786653    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0930 04:11:46.808990    5073 logs.go:123] Gathering logs for kubelet ...
	I0930 04:11:46.808999    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 04:11:46.846540    5073 logs.go:123] Gathering logs for kube-apiserver [ed13ab559759] ...
	I0930 04:11:46.846556    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed13ab559759"
	I0930 04:11:46.861553    5073 logs.go:123] Gathering logs for kube-apiserver [5c05fceb7aa1] ...
	I0930 04:11:46.861569    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c05fceb7aa1"
	I0930 04:11:46.887228    5073 logs.go:123] Gathering logs for kube-scheduler [82cb48f54510] ...
	I0930 04:11:46.887239    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82cb48f54510"
	I0930 04:11:46.904127    5073 logs.go:123] Gathering logs for kube-proxy [341e764f9485] ...
	I0930 04:11:46.904139    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 341e764f9485"
	I0930 04:11:46.916670    5073 logs.go:123] Gathering logs for kube-controller-manager [7204ff5e6c12] ...
	I0930 04:11:46.916681    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7204ff5e6c12"
	I0930 04:11:46.931227    5073 logs.go:123] Gathering logs for kube-controller-manager [e0d1000bec45] ...
	I0930 04:11:46.931236    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0d1000bec45"
	I0930 04:11:46.948883    5073 logs.go:123] Gathering logs for storage-provisioner [e90f49b06c7e] ...
	I0930 04:11:46.948892    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e90f49b06c7e"
	I0930 04:11:46.960973    5073 logs.go:123] Gathering logs for dmesg ...
	I0930 04:11:46.960984    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 04:11:46.965052    5073 logs.go:123] Gathering logs for describe nodes ...
	I0930 04:11:46.965061    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0930 04:11:47.005517    5073 logs.go:123] Gathering logs for coredns [857ff5a64aef] ...
	I0930 04:11:47.005528    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 857ff5a64aef"
	I0930 04:11:49.519440    5073 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:11:52.635576    4929 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:11:52.635879    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0930 04:11:52.661387    4929 logs.go:276] 1 containers: [c3591c3891b2]
	I0930 04:11:52.661536    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0930 04:11:52.677402    4929 logs.go:276] 1 containers: [c9f50b35a283]
	I0930 04:11:52.677507    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0930 04:11:52.690146    4929 logs.go:276] 4 containers: [86d6de14d3fe 1c58ad832f66 85a0a5385195 4b60eaea6a29]
	I0930 04:11:52.690234    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0930 04:11:52.708497    4929 logs.go:276] 1 containers: [7dc64314198d]
	I0930 04:11:52.708582    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0930 04:11:52.725578    4929 logs.go:276] 1 containers: [485edac7e4e9]
	I0930 04:11:52.725661    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0930 04:11:52.736307    4929 logs.go:276] 1 containers: [55b4e3fee39e]
	I0930 04:11:52.736393    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0930 04:11:52.747031    4929 logs.go:276] 0 containers: []
	W0930 04:11:52.747042    4929 logs.go:278] No container was found matching "kindnet"
	I0930 04:11:52.747113    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0930 04:11:52.757077    4929 logs.go:276] 1 containers: [918371f0f495]
	I0930 04:11:52.757096    4929 logs.go:123] Gathering logs for coredns [4b60eaea6a29] ...
	I0930 04:11:52.757102    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b60eaea6a29"
	I0930 04:11:52.769019    4929 logs.go:123] Gathering logs for kube-controller-manager [55b4e3fee39e] ...
	I0930 04:11:52.769029    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 55b4e3fee39e"
	I0930 04:11:52.787510    4929 logs.go:123] Gathering logs for container status ...
	I0930 04:11:52.787521    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 04:11:52.799370    4929 logs.go:123] Gathering logs for coredns [1c58ad832f66] ...
	I0930 04:11:52.799383    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c58ad832f66"
	I0930 04:11:52.811380    4929 logs.go:123] Gathering logs for kube-scheduler [7dc64314198d] ...
	I0930 04:11:52.811390    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7dc64314198d"
	I0930 04:11:52.826543    4929 logs.go:123] Gathering logs for coredns [86d6de14d3fe] ...
	I0930 04:11:52.826554    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86d6de14d3fe"
	I0930 04:11:52.838065    4929 logs.go:123] Gathering logs for coredns [85a0a5385195] ...
	I0930 04:11:52.838076    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85a0a5385195"
	I0930 04:11:52.850318    4929 logs.go:123] Gathering logs for kube-proxy [485edac7e4e9] ...
	I0930 04:11:52.850331    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 485edac7e4e9"
	I0930 04:11:52.863171    4929 logs.go:123] Gathering logs for storage-provisioner [918371f0f495] ...
	I0930 04:11:52.863183    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 918371f0f495"
	I0930 04:11:52.875084    4929 logs.go:123] Gathering logs for kubelet ...
	I0930 04:11:52.875100    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 04:11:52.909938    4929 logs.go:123] Gathering logs for kube-apiserver [c3591c3891b2] ...
	I0930 04:11:52.909952    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3591c3891b2"
	I0930 04:11:52.927599    4929 logs.go:123] Gathering logs for etcd [c9f50b35a283] ...
	I0930 04:11:52.927613    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9f50b35a283"
	I0930 04:11:52.941660    4929 logs.go:123] Gathering logs for dmesg ...
	I0930 04:11:52.941674    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 04:11:52.946669    4929 logs.go:123] Gathering logs for describe nodes ...
	I0930 04:11:52.946677    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0930 04:11:52.985331    4929 logs.go:123] Gathering logs for Docker ...
	I0930 04:11:52.985345    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0930 04:11:54.521781    5073 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:11:54.521873    5073 kubeadm.go:597] duration metric: took 4m3.362254917s to restartPrimaryControlPlane
	W0930 04:11:54.521939    5073 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0930 04:11:54.521969    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0930 04:11:55.561439    5073 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.039476291s)
	I0930 04:11:55.561518    5073 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0930 04:11:55.566321    5073 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0930 04:11:55.569014    5073 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0930 04:11:55.571627    5073 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0930 04:11:55.571634    5073 kubeadm.go:157] found existing configuration files:
	
	I0930 04:11:55.571662    5073 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50491 /etc/kubernetes/admin.conf
	I0930 04:11:55.574683    5073 kubeadm.go:163] "https://control-plane.minikube.internal:50491" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50491 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0930 04:11:55.574715    5073 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0930 04:11:55.577486    5073 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50491 /etc/kubernetes/kubelet.conf
	I0930 04:11:55.579778    5073 kubeadm.go:163] "https://control-plane.minikube.internal:50491" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50491 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0930 04:11:55.579801    5073 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0930 04:11:55.582827    5073 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50491 /etc/kubernetes/controller-manager.conf
	I0930 04:11:55.585659    5073 kubeadm.go:163] "https://control-plane.minikube.internal:50491" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50491 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0930 04:11:55.585684    5073 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0930 04:11:55.588126    5073 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50491 /etc/kubernetes/scheduler.conf
	I0930 04:11:55.591120    5073 kubeadm.go:163] "https://control-plane.minikube.internal:50491" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50491 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0930 04:11:55.591145    5073 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
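After the kubeadm reset above, the stale-config sweep greps each kubeconfig for the expected control-plane endpoint and deletes any file that does not match; here all four files are already gone, so every grep exits with status 2 and each rm -f is a no-op. A compact sketch of that loop, with the endpoint copied from the log (the function name is hypothetical):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // cleanStaleConfigs mirrors the grep/rm pairs above: any kubeconfig
    // that does not reference the expected control-plane endpoint is
    // removed so `kubeadm init` can regenerate it.
    func cleanStaleConfigs(endpoint string) {
        files := []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        }
        for _, f := range files {
            // grep exits non-zero when the endpoint (or the file itself)
            // is missing, which marks the config as stale.
            if err := exec.Command("sudo", "grep", endpoint, f).Run(); err != nil {
                fmt.Printf("%q may not be in %s - will remove\n", endpoint, f)
                exec.Command("sudo", "rm", "-f", f).Run()
            }
        }
    }

    func main() {
        cleanStaleConfigs("https://control-plane.minikube.internal:50491")
    }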
	I0930 04:11:55.594085    5073 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0930 04:11:55.610288    5073 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0930 04:11:55.610317    5073 kubeadm.go:310] [preflight] Running pre-flight checks
	I0930 04:11:55.658032    5073 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0930 04:11:55.658093    5073 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0930 04:11:55.658142    5073 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0930 04:11:55.708736    5073 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0930 04:11:55.712330    5073 out.go:235]   - Generating certificates and keys ...
	I0930 04:11:55.712365    5073 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0930 04:11:55.712400    5073 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0930 04:11:55.712504    5073 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0930 04:11:55.712541    5073 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0930 04:11:55.712626    5073 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0930 04:11:55.712654    5073 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0930 04:11:55.712681    5073 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0930 04:11:55.712753    5073 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0930 04:11:55.712804    5073 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0930 04:11:55.712841    5073 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0930 04:11:55.712902    5073 kubeadm.go:310] [certs] Using the existing "sa" key
	I0930 04:11:55.712930    5073 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0930 04:11:56.100578    5073 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0930 04:11:56.654738    5073 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0930 04:11:56.725664    5073 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0930 04:11:56.861680    5073 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0930 04:11:56.890354    5073 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0930 04:11:56.890796    5073 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0930 04:11:56.890829    5073 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0930 04:11:56.975241    5073 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0930 04:11:55.511312    4929 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:11:56.978433    5073 out.go:235]   - Booting up control plane ...
	I0930 04:11:56.978484    5073 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0930 04:11:56.978529    5073 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0930 04:11:56.978564    5073 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0930 04:11:56.978610    5073 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0930 04:11:56.978698    5073 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0930 04:12:00.980612    5073 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.002680 seconds
	I0930 04:12:00.980779    5073 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0930 04:12:00.983777    5073 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0930 04:12:01.501002    5073 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0930 04:12:01.501290    5073 kubeadm.go:310] [mark-control-plane] Marking the node stopped-upgrade-312000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0930 04:12:02.005201    5073 kubeadm.go:310] [bootstrap-token] Using token: 0avxwc.umyj1qdkitmbz22p
	I0930 04:12:02.012414    5073 out.go:235]   - Configuring RBAC rules ...
	I0930 04:12:02.012467    5073 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0930 04:12:02.012521    5073 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0930 04:12:02.019101    5073 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0930 04:12:02.020037    5073 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0930 04:12:02.021064    5073 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0930 04:12:02.021970    5073 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0930 04:12:02.025157    5073 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0930 04:12:02.215642    5073 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0930 04:12:02.409521    5073 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0930 04:12:02.410027    5073 kubeadm.go:310] 
	I0930 04:12:02.410056    5073 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0930 04:12:02.410059    5073 kubeadm.go:310] 
	I0930 04:12:02.410092    5073 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0930 04:12:02.410099    5073 kubeadm.go:310] 
	I0930 04:12:02.410117    5073 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0930 04:12:02.410154    5073 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0930 04:12:02.410184    5073 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0930 04:12:02.410191    5073 kubeadm.go:310] 
	I0930 04:12:02.410215    5073 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0930 04:12:02.410219    5073 kubeadm.go:310] 
	I0930 04:12:02.410245    5073 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0930 04:12:02.410247    5073 kubeadm.go:310] 
	I0930 04:12:02.410278    5073 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0930 04:12:02.410311    5073 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0930 04:12:02.410349    5073 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0930 04:12:02.410352    5073 kubeadm.go:310] 
	I0930 04:12:02.410398    5073 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0930 04:12:02.410435    5073 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0930 04:12:02.410440    5073 kubeadm.go:310] 
	I0930 04:12:02.410487    5073 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 0avxwc.umyj1qdkitmbz22p \
	I0930 04:12:02.410543    5073 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:72c345a63d908b27c1ed290ebc60ebd5e5e1c4e3ebfaa90fcb5390bc8578ae1d \
	I0930 04:12:02.410556    5073 kubeadm.go:310] 	--control-plane 
	I0930 04:12:02.410561    5073 kubeadm.go:310] 
	I0930 04:12:02.410598    5073 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0930 04:12:02.410601    5073 kubeadm.go:310] 
	I0930 04:12:02.410635    5073 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 0avxwc.umyj1qdkitmbz22p \
	I0930 04:12:02.410682    5073 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:72c345a63d908b27c1ed290ebc60ebd5e5e1c4e3ebfaa90fcb5390bc8578ae1d 
	I0930 04:12:02.410901    5073 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0930 04:12:02.410991    5073 cni.go:84] Creating CNI manager for ""
	I0930 04:12:02.411003    5073 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0930 04:12:02.414607    5073 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0930 04:12:02.421553    5073 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0930 04:12:02.424583    5073 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
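
Editor's note: here minikube writes a 496-byte bridge CNI config to /etc/cni/net.d/1-k8s.conflist. The exact payload is not shown in the log; the following is only an illustrative bridge conflist of the same general shape (all field values are assumptions, though the 10.244.0.0/16 pod subnet is consistent with the 10.244.0.x pod IPs seen later in the CoreDNS logs):

	# Illustrative only; not the exact bytes minikube copies.
	sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
	{
	  "cniVersion": "0.3.1",
	  "name": "k8s",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isGateway": true,
	      "ipMasq": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}
	EOF
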
	I0930 04:12:02.429357    5073 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0930 04:12:02.429408    5073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 04:12:02.429424    5073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-312000 minikube.k8s.io/updated_at=2024_09_30T04_12_02_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=b68b4b088317c82ffa16da1c47933e77f0f5d128 minikube.k8s.io/name=stopped-upgrade-312000 minikube.k8s.io/primary=true
	I0930 04:12:02.474703    5073 kubeadm.go:1113] duration metric: took 45.328917ms to wait for elevateKubeSystemPrivileges
	I0930 04:12:02.474721    5073 ops.go:34] apiserver oom_adj: -16
	I0930 04:12:02.474742    5073 kubeadm.go:394] duration metric: took 4m11.332896208s to StartCluster
	I0930 04:12:02.474756    5073 settings.go:142] acquiring lock: {Name:mk8d331f80592adde11c8565cba0670e3b2db485 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 04:12:02.474856    5073 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19734-1406/kubeconfig
	I0930 04:12:02.475272    5073 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19734-1406/kubeconfig: {Name:mkab83a5d15ec3b983b07760462d9a2ee8e3b4e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 04:12:02.475495    5073 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0930 04:12:02.475524    5073 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0930 04:12:02.475608    5073 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-312000"
	I0930 04:12:02.475615    5073 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-312000"
	W0930 04:12:02.475619    5073 addons.go:243] addon storage-provisioner should already be in state true
	I0930 04:12:02.475620    5073 config.go:182] Loaded profile config "stopped-upgrade-312000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0930 04:12:02.475629    5073 host.go:66] Checking if "stopped-upgrade-312000" exists ...
	I0930 04:12:02.475639    5073 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-312000"
	I0930 04:12:02.475709    5073 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-312000"
	I0930 04:12:02.475970    5073 retry.go:31] will retry after 689.330212ms: connect: dial unix /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/stopped-upgrade-312000/monitor: connect: connection refused
	I0930 04:12:02.476712    5073 kapi.go:59] client config for stopped-upgrade-312000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19734-1406/.minikube/profiles/stopped-upgrade-312000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19734-1406/.minikube/profiles/stopped-upgrade-312000/client.key", CAFile:"/Users/jenkins/minikube-integration/19734-1406/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x10662e5d0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0930 04:12:02.476865    5073 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-312000"
	W0930 04:12:02.476870    5073 addons.go:243] addon default-storageclass should already be in state true
	I0930 04:12:02.476876    5073 host.go:66] Checking if "stopped-upgrade-312000" exists ...
	I0930 04:12:02.477432    5073 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0930 04:12:02.477437    5073 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0930 04:12:02.477442    5073 sshutil.go:53] new ssh client: &{IP:localhost Port:50456 SSHKeyPath:/Users/jenkins/minikube-integration/19734-1406/.minikube/machines/stopped-upgrade-312000/id_rsa Username:docker}
	I0930 04:12:02.481526    5073 out.go:177] * Verifying Kubernetes components...
	I0930 04:12:00.513541    4929 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:12:00.513876    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0930 04:12:00.541654    4929 logs.go:276] 1 containers: [c3591c3891b2]
	I0930 04:12:00.541809    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0930 04:12:00.561188    4929 logs.go:276] 1 containers: [c9f50b35a283]
	I0930 04:12:00.561290    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0930 04:12:00.574934    4929 logs.go:276] 4 containers: [86d6de14d3fe 1c58ad832f66 85a0a5385195 4b60eaea6a29]
	I0930 04:12:00.575033    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0930 04:12:00.586088    4929 logs.go:276] 1 containers: [7dc64314198d]
	I0930 04:12:00.586167    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0930 04:12:00.599643    4929 logs.go:276] 1 containers: [485edac7e4e9]
	I0930 04:12:00.599726    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0930 04:12:00.609925    4929 logs.go:276] 1 containers: [55b4e3fee39e]
	I0930 04:12:00.610017    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0930 04:12:00.620147    4929 logs.go:276] 0 containers: []
	W0930 04:12:00.620159    4929 logs.go:278] No container was found matching "kindnet"
	I0930 04:12:00.620240    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0930 04:12:00.630648    4929 logs.go:276] 1 containers: [918371f0f495]
	I0930 04:12:00.630665    4929 logs.go:123] Gathering logs for describe nodes ...
	I0930 04:12:00.630671    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0930 04:12:00.670169    4929 logs.go:123] Gathering logs for coredns [85a0a5385195] ...
	I0930 04:12:00.670180    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85a0a5385195"
	I0930 04:12:00.682386    4929 logs.go:123] Gathering logs for kube-controller-manager [55b4e3fee39e] ...
	I0930 04:12:00.682398    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 55b4e3fee39e"
	I0930 04:12:00.700575    4929 logs.go:123] Gathering logs for storage-provisioner [918371f0f495] ...
	I0930 04:12:00.700585    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 918371f0f495"
	I0930 04:12:00.712523    4929 logs.go:123] Gathering logs for container status ...
	I0930 04:12:00.712534    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 04:12:00.725315    4929 logs.go:123] Gathering logs for coredns [1c58ad832f66] ...
	I0930 04:12:00.725324    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c58ad832f66"
	I0930 04:12:00.737007    4929 logs.go:123] Gathering logs for coredns [4b60eaea6a29] ...
	I0930 04:12:00.737022    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b60eaea6a29"
	I0930 04:12:00.749116    4929 logs.go:123] Gathering logs for kube-proxy [485edac7e4e9] ...
	I0930 04:12:00.749128    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 485edac7e4e9"
	I0930 04:12:00.762067    4929 logs.go:123] Gathering logs for Docker ...
	I0930 04:12:00.762082    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0930 04:12:00.787811    4929 logs.go:123] Gathering logs for kube-scheduler [7dc64314198d] ...
	I0930 04:12:00.787827    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7dc64314198d"
	I0930 04:12:00.804269    4929 logs.go:123] Gathering logs for kubelet ...
	I0930 04:12:00.804281    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 04:12:00.841081    4929 logs.go:123] Gathering logs for dmesg ...
	I0930 04:12:00.841096    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 04:12:00.846264    4929 logs.go:123] Gathering logs for kube-apiserver [c3591c3891b2] ...
	I0930 04:12:00.846277    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3591c3891b2"
	I0930 04:12:00.862164    4929 logs.go:123] Gathering logs for etcd [c9f50b35a283] ...
	I0930 04:12:00.862178    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9f50b35a283"
	I0930 04:12:00.881337    4929 logs.go:123] Gathering logs for coredns [86d6de14d3fe] ...
	I0930 04:12:00.881350    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86d6de14d3fe"
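
Editor's note: each "Gathering logs for ..." block above is the same operation applied per component: list matching container IDs with a docker ps name filter, then tail that container's logs. A compressed sketch of the loop (assumed shape; the real logic is Go code inside minikube, not a shell script):

	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	            kube-controller-manager storage-provisioner; do
	  # Same filter minikube uses in the log lines above.
	  for id in $(docker ps -a --filter="name=k8s_${name}" --format='{{.ID}}'); do
	    echo "==> ${name} [${id}] <=="
	    docker logs --tail 400 "$id"
	  done
	done
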
	I0930 04:12:02.487586    5073 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 04:12:02.579041    5073 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0930 04:12:02.584237    5073 api_server.go:52] waiting for apiserver process to appear ...
	I0930 04:12:02.584283    5073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 04:12:02.588308    5073 api_server.go:72] duration metric: took 112.803458ms to wait for apiserver process to appear ...
	I0930 04:12:02.588315    5073 api_server.go:88] waiting for apiserver healthz status ...
	I0930 04:12:02.588323    5073 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
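
Editor's note: the healthz wait that starts here (and keeps failing for the rest of this log) is a poll of the apiserver's /healthz endpoint until it answers "ok" or the overall timeout expires. Roughly equivalent by hand, assuming the in-guest address from the log; this probe is illustrative, not minikube's code:

	# -k because the cluster uses minikube's own CA.
	until curl -sk --max-time 5 https://10.0.2.15:8443/healthz | grep -q ok; do
	  echo "apiserver not healthy yet; retrying"
	  sleep 5
	done
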
	I0930 04:12:02.657507    5073 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0930 04:12:02.960172    5073 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0930 04:12:02.960189    5073 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0930 04:12:03.171043    5073 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0930 04:12:03.175122    5073 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0930 04:12:03.175132    5073 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0930 04:12:03.175148    5073 sshutil.go:53] new ssh client: &{IP:localhost Port:50456 SSHKeyPath:/Users/jenkins/minikube-integration/19734-1406/.minikube/machines/stopped-upgrade-312000/id_rsa Username:docker}
	I0930 04:12:03.217687    5073 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0930 04:12:03.403636    4929 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:12:07.590330    5073 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:12:07.590381    5073 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:12:08.404632    4929 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:12:08.404818    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0930 04:12:08.419138    4929 logs.go:276] 1 containers: [c3591c3891b2]
	I0930 04:12:08.419222    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0930 04:12:08.429968    4929 logs.go:276] 1 containers: [c9f50b35a283]
	I0930 04:12:08.430053    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0930 04:12:08.443492    4929 logs.go:276] 4 containers: [86d6de14d3fe 1c58ad832f66 85a0a5385195 4b60eaea6a29]
	I0930 04:12:08.443574    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0930 04:12:08.453820    4929 logs.go:276] 1 containers: [7dc64314198d]
	I0930 04:12:08.453905    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0930 04:12:08.464732    4929 logs.go:276] 1 containers: [485edac7e4e9]
	I0930 04:12:08.464826    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0930 04:12:08.475683    4929 logs.go:276] 1 containers: [55b4e3fee39e]
	I0930 04:12:08.475771    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0930 04:12:08.487466    4929 logs.go:276] 0 containers: []
	W0930 04:12:08.487477    4929 logs.go:278] No container was found matching "kindnet"
	I0930 04:12:08.487548    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0930 04:12:08.497742    4929 logs.go:276] 1 containers: [918371f0f495]
	I0930 04:12:08.497762    4929 logs.go:123] Gathering logs for dmesg ...
	I0930 04:12:08.497768    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 04:12:08.503599    4929 logs.go:123] Gathering logs for kube-proxy [485edac7e4e9] ...
	I0930 04:12:08.503610    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 485edac7e4e9"
	I0930 04:12:08.515644    4929 logs.go:123] Gathering logs for container status ...
	I0930 04:12:08.515658    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 04:12:08.527683    4929 logs.go:123] Gathering logs for coredns [4b60eaea6a29] ...
	I0930 04:12:08.527693    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b60eaea6a29"
	I0930 04:12:08.546984    4929 logs.go:123] Gathering logs for kube-scheduler [7dc64314198d] ...
	I0930 04:12:08.546996    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7dc64314198d"
	I0930 04:12:08.562673    4929 logs.go:123] Gathering logs for Docker ...
	I0930 04:12:08.562683    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0930 04:12:08.586080    4929 logs.go:123] Gathering logs for kubelet ...
	I0930 04:12:08.586088    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 04:12:08.619089    4929 logs.go:123] Gathering logs for describe nodes ...
	I0930 04:12:08.619098    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0930 04:12:08.654309    4929 logs.go:123] Gathering logs for etcd [c9f50b35a283] ...
	I0930 04:12:08.654320    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9f50b35a283"
	I0930 04:12:08.668330    4929 logs.go:123] Gathering logs for kube-controller-manager [55b4e3fee39e] ...
	I0930 04:12:08.668345    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 55b4e3fee39e"
	I0930 04:12:08.686318    4929 logs.go:123] Gathering logs for storage-provisioner [918371f0f495] ...
	I0930 04:12:08.686329    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 918371f0f495"
	I0930 04:12:08.701255    4929 logs.go:123] Gathering logs for kube-apiserver [c3591c3891b2] ...
	I0930 04:12:08.701270    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3591c3891b2"
	I0930 04:12:08.715468    4929 logs.go:123] Gathering logs for coredns [86d6de14d3fe] ...
	I0930 04:12:08.715483    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86d6de14d3fe"
	I0930 04:12:08.727413    4929 logs.go:123] Gathering logs for coredns [1c58ad832f66] ...
	I0930 04:12:08.727427    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c58ad832f66"
	I0930 04:12:08.740671    4929 logs.go:123] Gathering logs for coredns [85a0a5385195] ...
	I0930 04:12:08.740681    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85a0a5385195"
	I0930 04:12:11.254346    4929 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:12:12.590683    5073 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:12:12.590738    5073 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:12:16.255641    4929 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:12:16.255807    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0930 04:12:16.268121    4929 logs.go:276] 1 containers: [c3591c3891b2]
	I0930 04:12:16.268210    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0930 04:12:16.279551    4929 logs.go:276] 1 containers: [c9f50b35a283]
	I0930 04:12:16.279638    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0930 04:12:16.294482    4929 logs.go:276] 4 containers: [86d6de14d3fe 1c58ad832f66 85a0a5385195 4b60eaea6a29]
	I0930 04:12:16.294569    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0930 04:12:16.305360    4929 logs.go:276] 1 containers: [7dc64314198d]
	I0930 04:12:16.305442    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0930 04:12:16.315873    4929 logs.go:276] 1 containers: [485edac7e4e9]
	I0930 04:12:16.315960    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0930 04:12:16.330146    4929 logs.go:276] 1 containers: [55b4e3fee39e]
	I0930 04:12:16.330226    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0930 04:12:16.341011    4929 logs.go:276] 0 containers: []
	W0930 04:12:16.341024    4929 logs.go:278] No container was found matching "kindnet"
	I0930 04:12:16.341101    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0930 04:12:16.352818    4929 logs.go:276] 1 containers: [918371f0f495]
	I0930 04:12:16.352836    4929 logs.go:123] Gathering logs for etcd [c9f50b35a283] ...
	I0930 04:12:16.352842    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9f50b35a283"
	I0930 04:12:16.368262    4929 logs.go:123] Gathering logs for coredns [86d6de14d3fe] ...
	I0930 04:12:16.368272    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86d6de14d3fe"
	I0930 04:12:16.384053    4929 logs.go:123] Gathering logs for coredns [1c58ad832f66] ...
	I0930 04:12:16.384066    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c58ad832f66"
	I0930 04:12:16.395264    4929 logs.go:123] Gathering logs for coredns [4b60eaea6a29] ...
	I0930 04:12:16.395275    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b60eaea6a29"
	I0930 04:12:16.406846    4929 logs.go:123] Gathering logs for kube-controller-manager [55b4e3fee39e] ...
	I0930 04:12:16.406859    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 55b4e3fee39e"
	I0930 04:12:16.425546    4929 logs.go:123] Gathering logs for kubelet ...
	I0930 04:12:16.425557    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 04:12:16.461580    4929 logs.go:123] Gathering logs for kube-apiserver [c3591c3891b2] ...
	I0930 04:12:16.461594    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3591c3891b2"
	I0930 04:12:16.476412    4929 logs.go:123] Gathering logs for dmesg ...
	I0930 04:12:16.476422    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 04:12:16.480968    4929 logs.go:123] Gathering logs for describe nodes ...
	I0930 04:12:16.480977    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0930 04:12:16.515686    4929 logs.go:123] Gathering logs for storage-provisioner [918371f0f495] ...
	I0930 04:12:16.515696    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 918371f0f495"
	I0930 04:12:16.528009    4929 logs.go:123] Gathering logs for Docker ...
	I0930 04:12:16.528022    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0930 04:12:16.552542    4929 logs.go:123] Gathering logs for container status ...
	I0930 04:12:16.552554    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 04:12:16.564752    4929 logs.go:123] Gathering logs for coredns [85a0a5385195] ...
	I0930 04:12:16.564763    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85a0a5385195"
	I0930 04:12:16.576716    4929 logs.go:123] Gathering logs for kube-scheduler [7dc64314198d] ...
	I0930 04:12:16.576727    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7dc64314198d"
	I0930 04:12:16.592327    4929 logs.go:123] Gathering logs for kube-proxy [485edac7e4e9] ...
	I0930 04:12:16.592336    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 485edac7e4e9"
	I0930 04:12:17.591100    5073 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:12:17.591170    5073 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:12:19.106172    4929 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:12:22.591594    5073 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:12:22.591653    5073 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:12:24.108430    4929 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:12:24.108649    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0930 04:12:24.129034    4929 logs.go:276] 1 containers: [c3591c3891b2]
	I0930 04:12:24.129128    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0930 04:12:24.147035    4929 logs.go:276] 1 containers: [c9f50b35a283]
	I0930 04:12:24.147122    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0930 04:12:24.158204    4929 logs.go:276] 4 containers: [86d6de14d3fe 1c58ad832f66 85a0a5385195 4b60eaea6a29]
	I0930 04:12:24.158288    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0930 04:12:24.168817    4929 logs.go:276] 1 containers: [7dc64314198d]
	I0930 04:12:24.168893    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0930 04:12:24.180407    4929 logs.go:276] 1 containers: [485edac7e4e9]
	I0930 04:12:24.180493    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0930 04:12:24.191073    4929 logs.go:276] 1 containers: [55b4e3fee39e]
	I0930 04:12:24.191152    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0930 04:12:24.201362    4929 logs.go:276] 0 containers: []
	W0930 04:12:24.201374    4929 logs.go:278] No container was found matching "kindnet"
	I0930 04:12:24.201445    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0930 04:12:24.211908    4929 logs.go:276] 1 containers: [918371f0f495]
	I0930 04:12:24.211928    4929 logs.go:123] Gathering logs for storage-provisioner [918371f0f495] ...
	I0930 04:12:24.211934    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 918371f0f495"
	I0930 04:12:24.223576    4929 logs.go:123] Gathering logs for kube-apiserver [c3591c3891b2] ...
	I0930 04:12:24.223588    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3591c3891b2"
	I0930 04:12:24.237201    4929 logs.go:123] Gathering logs for coredns [85a0a5385195] ...
	I0930 04:12:24.237212    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85a0a5385195"
	I0930 04:12:24.249541    4929 logs.go:123] Gathering logs for kube-scheduler [7dc64314198d] ...
	I0930 04:12:24.249552    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7dc64314198d"
	I0930 04:12:24.265335    4929 logs.go:123] Gathering logs for kube-proxy [485edac7e4e9] ...
	I0930 04:12:24.265347    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 485edac7e4e9"
	I0930 04:12:24.279498    4929 logs.go:123] Gathering logs for Docker ...
	I0930 04:12:24.279509    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0930 04:12:24.303210    4929 logs.go:123] Gathering logs for kubelet ...
	I0930 04:12:24.303220    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 04:12:24.336787    4929 logs.go:123] Gathering logs for etcd [c9f50b35a283] ...
	I0930 04:12:24.336796    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9f50b35a283"
	I0930 04:12:24.350969    4929 logs.go:123] Gathering logs for coredns [1c58ad832f66] ...
	I0930 04:12:24.350983    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c58ad832f66"
	I0930 04:12:24.370981    4929 logs.go:123] Gathering logs for container status ...
	I0930 04:12:24.370993    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 04:12:24.382626    4929 logs.go:123] Gathering logs for coredns [86d6de14d3fe] ...
	I0930 04:12:24.382639    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86d6de14d3fe"
	I0930 04:12:24.399216    4929 logs.go:123] Gathering logs for coredns [4b60eaea6a29] ...
	I0930 04:12:24.399229    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b60eaea6a29"
	I0930 04:12:24.410794    4929 logs.go:123] Gathering logs for kube-controller-manager [55b4e3fee39e] ...
	I0930 04:12:24.410807    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 55b4e3fee39e"
	I0930 04:12:24.428453    4929 logs.go:123] Gathering logs for dmesg ...
	I0930 04:12:24.428464    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 04:12:24.433450    4929 logs.go:123] Gathering logs for describe nodes ...
	I0930 04:12:24.433457    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0930 04:12:26.970726    4929 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:12:27.592253    5073 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:12:27.592287    5073 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:12:32.593048    5073 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:12:32.593077    5073 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0930 04:12:32.961905    5073 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0930 04:12:32.965093    5073 out.go:177] * Enabled addons: storage-provisioner
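
Editor's note: default-storageclass fails while storage-provisioner succeeds because the two go through different paths. The storage-provisioner manifest is applied by kubectl run over ssh inside the guest (the `sudo KUBECONFIG=... kubectl apply` lines above), whereas the default-storageclass callback uses the host-side client pointed at https://10.0.2.15:8443 — a QEMU user-mode network address that the host generally cannot reach directly, hence the dial timeout. What the failed callback amounts to, expressed as kubectl against a healthy cluster (illustrative; the "standard" class name is minikube's default):

	kubectl get storageclass
	kubectl patch storageclass standard -p \
	  '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
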
	I0930 04:12:31.971096    4929 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:12:31.971284    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0930 04:12:31.984027    4929 logs.go:276] 1 containers: [c3591c3891b2]
	I0930 04:12:31.984128    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0930 04:12:31.994411    4929 logs.go:276] 1 containers: [c9f50b35a283]
	I0930 04:12:31.994497    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0930 04:12:32.006122    4929 logs.go:276] 4 containers: [86d6de14d3fe 1c58ad832f66 85a0a5385195 4b60eaea6a29]
	I0930 04:12:32.006213    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0930 04:12:32.018228    4929 logs.go:276] 1 containers: [7dc64314198d]
	I0930 04:12:32.018308    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0930 04:12:32.028744    4929 logs.go:276] 1 containers: [485edac7e4e9]
	I0930 04:12:32.028824    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0930 04:12:32.039306    4929 logs.go:276] 1 containers: [55b4e3fee39e]
	I0930 04:12:32.039390    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0930 04:12:32.049487    4929 logs.go:276] 0 containers: []
	W0930 04:12:32.049503    4929 logs.go:278] No container was found matching "kindnet"
	I0930 04:12:32.049577    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0930 04:12:32.059781    4929 logs.go:276] 1 containers: [918371f0f495]
	I0930 04:12:32.059796    4929 logs.go:123] Gathering logs for coredns [86d6de14d3fe] ...
	I0930 04:12:32.059802    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86d6de14d3fe"
	I0930 04:12:32.071613    4929 logs.go:123] Gathering logs for kube-scheduler [7dc64314198d] ...
	I0930 04:12:32.071622    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7dc64314198d"
	I0930 04:12:32.087982    4929 logs.go:123] Gathering logs for kube-controller-manager [55b4e3fee39e] ...
	I0930 04:12:32.087996    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 55b4e3fee39e"
	I0930 04:12:32.106921    4929 logs.go:123] Gathering logs for dmesg ...
	I0930 04:12:32.106931    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 04:12:32.111395    4929 logs.go:123] Gathering logs for kube-proxy [485edac7e4e9] ...
	I0930 04:12:32.111402    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 485edac7e4e9"
	I0930 04:12:32.122726    4929 logs.go:123] Gathering logs for Docker ...
	I0930 04:12:32.122736    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0930 04:12:32.146069    4929 logs.go:123] Gathering logs for coredns [1c58ad832f66] ...
	I0930 04:12:32.146079    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c58ad832f66"
	I0930 04:12:32.157457    4929 logs.go:123] Gathering logs for etcd [c9f50b35a283] ...
	I0930 04:12:32.157467    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9f50b35a283"
	I0930 04:12:32.171556    4929 logs.go:123] Gathering logs for describe nodes ...
	I0930 04:12:32.171566    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0930 04:12:32.206153    4929 logs.go:123] Gathering logs for kube-apiserver [c3591c3891b2] ...
	I0930 04:12:32.206169    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3591c3891b2"
	I0930 04:12:32.221447    4929 logs.go:123] Gathering logs for coredns [85a0a5385195] ...
	I0930 04:12:32.221459    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85a0a5385195"
	I0930 04:12:32.233684    4929 logs.go:123] Gathering logs for coredns [4b60eaea6a29] ...
	I0930 04:12:32.233693    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b60eaea6a29"
	I0930 04:12:32.245955    4929 logs.go:123] Gathering logs for storage-provisioner [918371f0f495] ...
	I0930 04:12:32.245965    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 918371f0f495"
	I0930 04:12:32.257587    4929 logs.go:123] Gathering logs for container status ...
	I0930 04:12:32.257596    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 04:12:32.269431    4929 logs.go:123] Gathering logs for kubelet ...
	I0930 04:12:32.269442    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 04:12:32.984692    5073 addons.go:510] duration metric: took 30.509712792s for enable addons: enabled=[storage-provisioner]
	I0930 04:12:34.805679    4929 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:12:37.594485    5073 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:12:37.594541    5073 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:12:39.807852    4929 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:12:39.808131    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0930 04:12:39.827402    4929 logs.go:276] 1 containers: [c3591c3891b2]
	I0930 04:12:39.827523    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0930 04:12:39.841336    4929 logs.go:276] 1 containers: [c9f50b35a283]
	I0930 04:12:39.841431    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0930 04:12:39.853419    4929 logs.go:276] 4 containers: [86d6de14d3fe 1c58ad832f66 85a0a5385195 4b60eaea6a29]
	I0930 04:12:39.853521    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0930 04:12:39.864079    4929 logs.go:276] 1 containers: [7dc64314198d]
	I0930 04:12:39.864159    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0930 04:12:39.874892    4929 logs.go:276] 1 containers: [485edac7e4e9]
	I0930 04:12:39.874978    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0930 04:12:39.885671    4929 logs.go:276] 1 containers: [55b4e3fee39e]
	I0930 04:12:39.885754    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0930 04:12:39.896517    4929 logs.go:276] 0 containers: []
	W0930 04:12:39.896528    4929 logs.go:278] No container was found matching "kindnet"
	I0930 04:12:39.896600    4929 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0930 04:12:39.906781    4929 logs.go:276] 1 containers: [918371f0f495]
	I0930 04:12:39.906797    4929 logs.go:123] Gathering logs for coredns [86d6de14d3fe] ...
	I0930 04:12:39.906802    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86d6de14d3fe"
	I0930 04:12:39.918274    4929 logs.go:123] Gathering logs for Docker ...
	I0930 04:12:39.918284    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0930 04:12:39.942165    4929 logs.go:123] Gathering logs for describe nodes ...
	I0930 04:12:39.942174    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0930 04:12:39.978377    4929 logs.go:123] Gathering logs for coredns [1c58ad832f66] ...
	I0930 04:12:39.978387    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c58ad832f66"
	I0930 04:12:39.990606    4929 logs.go:123] Gathering logs for storage-provisioner [918371f0f495] ...
	I0930 04:12:39.990617    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 918371f0f495"
	I0930 04:12:40.002501    4929 logs.go:123] Gathering logs for kube-apiserver [c3591c3891b2] ...
	I0930 04:12:40.002512    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3591c3891b2"
	I0930 04:12:40.016532    4929 logs.go:123] Gathering logs for etcd [c9f50b35a283] ...
	I0930 04:12:40.016544    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9f50b35a283"
	I0930 04:12:40.031037    4929 logs.go:123] Gathering logs for kube-scheduler [7dc64314198d] ...
	I0930 04:12:40.031047    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7dc64314198d"
	I0930 04:12:40.047423    4929 logs.go:123] Gathering logs for kube-proxy [485edac7e4e9] ...
	I0930 04:12:40.047436    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 485edac7e4e9"
	I0930 04:12:40.059039    4929 logs.go:123] Gathering logs for kubelet ...
	I0930 04:12:40.059050    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 04:12:40.093364    4929 logs.go:123] Gathering logs for coredns [85a0a5385195] ...
	I0930 04:12:40.093376    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85a0a5385195"
	I0930 04:12:40.105123    4929 logs.go:123] Gathering logs for coredns [4b60eaea6a29] ...
	I0930 04:12:40.105139    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b60eaea6a29"
	I0930 04:12:40.116813    4929 logs.go:123] Gathering logs for kube-controller-manager [55b4e3fee39e] ...
	I0930 04:12:40.116825    4929 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 55b4e3fee39e"
	I0930 04:12:40.133703    4929 logs.go:123] Gathering logs for container status ...
	I0930 04:12:40.133712    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 04:12:40.147810    4929 logs.go:123] Gathering logs for dmesg ...
	I0930 04:12:40.147820    4929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 04:12:42.654067    4929 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:12:42.596151    5073 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:12:42.596199    5073 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:12:47.656185    4929 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:12:47.661527    4929 out.go:201] 
	W0930 04:12:47.665338    4929 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0930 04:12:47.665344    4929 out.go:270] * 
	W0930 04:12:47.665818    4929 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0930 04:12:47.671508    4929 out.go:201] 
	I0930 04:12:47.598046    5073 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:12:47.598088    5073 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:12:52.600221    5073 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:12:52.600261    5073 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:12:57.602518    5073 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:12:57.602577    5073 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	
	
	==> Docker <==
	-- Journal begins at Mon 2024-09-30 11:03:51 UTC, ends at Mon 2024-09-30 11:13:03 UTC. --
	Sep 30 11:12:47 running-upgrade-520000 dockerd[3257]: time="2024-09-30T11:12:47.865690024Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/05a16c84eb0f8aad30c0355fb8857935386a2e90d9f0a0d3932df485757e54fe pid=18858 runtime=io.containerd.runc.v2
	Sep 30 11:12:48 running-upgrade-520000 cri-dockerd[3097]: time="2024-09-30T11:12:48Z" level=error msg="ContainerStats resp: {0x4000427ec0 linux}"
	Sep 30 11:12:48 running-upgrade-520000 cri-dockerd[3097]: time="2024-09-30T11:12:48Z" level=error msg="ContainerStats resp: {0x4000864600 linux}"
	Sep 30 11:12:48 running-upgrade-520000 cri-dockerd[3097]: time="2024-09-30T11:12:48Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Sep 30 11:12:49 running-upgrade-520000 cri-dockerd[3097]: time="2024-09-30T11:12:49Z" level=error msg="ContainerStats resp: {0x4000a28280 linux}"
	Sep 30 11:12:50 running-upgrade-520000 cri-dockerd[3097]: time="2024-09-30T11:12:50Z" level=error msg="ContainerStats resp: {0x4000a28d80 linux}"
	Sep 30 11:12:50 running-upgrade-520000 cri-dockerd[3097]: time="2024-09-30T11:12:50Z" level=error msg="ContainerStats resp: {0x4000a28ec0 linux}"
	Sep 30 11:12:50 running-upgrade-520000 cri-dockerd[3097]: time="2024-09-30T11:12:50Z" level=error msg="ContainerStats resp: {0x40008c1a80 linux}"
	Sep 30 11:12:50 running-upgrade-520000 cri-dockerd[3097]: time="2024-09-30T11:12:50Z" level=error msg="ContainerStats resp: {0x40008c1e80 linux}"
	Sep 30 11:12:50 running-upgrade-520000 cri-dockerd[3097]: time="2024-09-30T11:12:50Z" level=error msg="ContainerStats resp: {0x4000a29dc0 linux}"
	Sep 30 11:12:50 running-upgrade-520000 cri-dockerd[3097]: time="2024-09-30T11:12:50Z" level=error msg="ContainerStats resp: {0x400007f600 linux}"
	Sep 30 11:12:50 running-upgrade-520000 cri-dockerd[3097]: time="2024-09-30T11:12:50Z" level=error msg="ContainerStats resp: {0x400007fc40 linux}"
	Sep 30 11:12:53 running-upgrade-520000 cri-dockerd[3097]: time="2024-09-30T11:12:53Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Sep 30 11:12:58 running-upgrade-520000 cri-dockerd[3097]: time="2024-09-30T11:12:58Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Sep 30 11:13:00 running-upgrade-520000 cri-dockerd[3097]: time="2024-09-30T11:13:00Z" level=error msg="ContainerStats resp: {0x4000709ec0 linux}"
	Sep 30 11:13:00 running-upgrade-520000 cri-dockerd[3097]: time="2024-09-30T11:13:00Z" level=error msg="ContainerStats resp: {0x4000904600 linux}"
	Sep 30 11:13:01 running-upgrade-520000 cri-dockerd[3097]: time="2024-09-30T11:13:01Z" level=error msg="ContainerStats resp: {0x40008c06c0 linux}"
	Sep 30 11:13:02 running-upgrade-520000 cri-dockerd[3097]: time="2024-09-30T11:13:02Z" level=error msg="ContainerStats resp: {0x4000a28f40 linux}"
	Sep 30 11:13:02 running-upgrade-520000 cri-dockerd[3097]: time="2024-09-30T11:13:02Z" level=error msg="ContainerStats resp: {0x40008c12c0 linux}"
	Sep 30 11:13:02 running-upgrade-520000 cri-dockerd[3097]: time="2024-09-30T11:13:02Z" level=error msg="ContainerStats resp: {0x40008c1740 linux}"
	Sep 30 11:13:02 running-upgrade-520000 cri-dockerd[3097]: time="2024-09-30T11:13:02Z" level=error msg="ContainerStats resp: {0x40008c0740 linux}"
	Sep 30 11:13:02 running-upgrade-520000 cri-dockerd[3097]: time="2024-09-30T11:13:02Z" level=error msg="ContainerStats resp: {0x40008c0ac0 linux}"
	Sep 30 11:13:02 running-upgrade-520000 cri-dockerd[3097]: time="2024-09-30T11:13:02Z" level=error msg="ContainerStats resp: {0x4000a297c0 linux}"
	Sep 30 11:13:02 running-upgrade-520000 cri-dockerd[3097]: time="2024-09-30T11:13:02Z" level=error msg="ContainerStats resp: {0x40008c1a80 linux}"
	Sep 30 11:13:03 running-upgrade-520000 cri-dockerd[3097]: time="2024-09-30T11:13:03Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	05a16c84eb0f8       edaa71f2aee88       16 seconds ago      Running             coredns                   2                   e4c88b62f271c
	94264c0d118a8       edaa71f2aee88       16 seconds ago      Running             coredns                   2                   ae6494987de7b
	86d6de14d3fe5       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   ae6494987de7b
	1c58ad832f667       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   e4c88b62f271c
	485edac7e4e9f       fcbd620bbac08       4 minutes ago       Running             kube-proxy                0                   d2bcfdbb45d32
	918371f0f495d       66749159455b3       4 minutes ago       Running             storage-provisioner       0                   dacc41691e6ed
	7dc64314198d0       000c19baf6bba       4 minutes ago       Running             kube-scheduler            0                   e166d2d7e4718
	55b4e3fee39e5       f61bbe9259d7c       4 minutes ago       Running             kube-controller-manager   0                   fae304c4a4d65
	c3591c3891b2e       7c5896a75862a       4 minutes ago       Running             kube-apiserver            0                   26401aa68fa58
	c9f50b35a283a       a9a710bb96df0       4 minutes ago       Running             etcd                      0                   464839bf1a2a4
	
	
	==> coredns [05a16c84eb0f] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 7209704907749853876.7567418866307182865. HINFO: read udp 10.244.0.2:41409->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7209704907749853876.7567418866307182865. HINFO: read udp 10.244.0.2:35940->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7209704907749853876.7567418866307182865. HINFO: read udp 10.244.0.2:36397->10.0.2.3:53: i/o timeout
	
	
	==> coredns [1c58ad832f66] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 487686940839370912.5122351800231769126. HINFO: read udp 10.244.0.2:60427->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 487686940839370912.5122351800231769126. HINFO: read udp 10.244.0.2:37949->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 487686940839370912.5122351800231769126. HINFO: read udp 10.244.0.2:57692->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 487686940839370912.5122351800231769126. HINFO: read udp 10.244.0.2:40728->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 487686940839370912.5122351800231769126. HINFO: read udp 10.244.0.2:45149->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 487686940839370912.5122351800231769126. HINFO: read udp 10.244.0.2:37284->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 487686940839370912.5122351800231769126. HINFO: read udp 10.244.0.2:48904->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 487686940839370912.5122351800231769126. HINFO: read udp 10.244.0.2:57661->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 487686940839370912.5122351800231769126. HINFO: read udp 10.244.0.2:37442->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 487686940839370912.5122351800231769126. HINFO: read udp 10.244.0.2:43239->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [86d6de14d3fe] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 8815466625533150142.2533356109349556821. HINFO: read udp 10.244.0.3:46155->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8815466625533150142.2533356109349556821. HINFO: read udp 10.244.0.3:46336->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8815466625533150142.2533356109349556821. HINFO: read udp 10.244.0.3:55242->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8815466625533150142.2533356109349556821. HINFO: read udp 10.244.0.3:35551->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8815466625533150142.2533356109349556821. HINFO: read udp 10.244.0.3:48401->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8815466625533150142.2533356109349556821. HINFO: read udp 10.244.0.3:40251->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8815466625533150142.2533356109349556821. HINFO: read udp 10.244.0.3:56687->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8815466625533150142.2533356109349556821. HINFO: read udp 10.244.0.3:44231->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8815466625533150142.2533356109349556821. HINFO: read udp 10.244.0.3:41311->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8815466625533150142.2533356109349556821. HINFO: read udp 10.244.0.3:36897->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [94264c0d118a] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 2419743410796672339.4038758837009262052. HINFO: read udp 10.244.0.3:41522->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2419743410796672339.4038758837009262052. HINFO: read udp 10.244.0.3:46188->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2419743410796672339.4038758837009262052. HINFO: read udp 10.244.0.3:41221->10.0.2.3:53: i/o timeout
	
	
	==> describe nodes <==
	Name:               running-upgrade-520000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=running-upgrade-520000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b68b4b088317c82ffa16da1c47933e77f0f5d128
	                    minikube.k8s.io/name=running-upgrade-520000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_30T04_08_46_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 30 Sep 2024 11:08:43 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  running-upgrade-520000
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 30 Sep 2024 11:13:01 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 30 Sep 2024 11:08:46 +0000   Mon, 30 Sep 2024 11:08:42 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 30 Sep 2024 11:08:46 +0000   Mon, 30 Sep 2024 11:08:42 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 30 Sep 2024 11:08:46 +0000   Mon, 30 Sep 2024 11:08:42 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 30 Sep 2024 11:08:46 +0000   Mon, 30 Sep 2024 11:08:46 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  10.0.2.15
	  Hostname:    running-upgrade-520000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	System Info:
	  Machine ID:                 869b72fe66584d2c89fe234af0d3d412
	  System UUID:                869b72fe66584d2c89fe234af0d3d412
	  Boot ID:                    5e9bc854-0e57-4b93-ae48-0fdece7e253e
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://20.10.16
	  Kubelet Version:            v1.24.1
	  Kube-Proxy Version:         v1.24.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-d6868                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m4s
	  kube-system                 coredns-6d4b75cb6d-ph9mk                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m4s
	  kube-system                 etcd-running-upgrade-520000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m18s
	  kube-system                 kube-apiserver-running-upgrade-520000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 kube-controller-manager-running-upgrade-520000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 kube-proxy-6l7zr                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m4s
	  kube-system                 kube-scheduler-running-upgrade-520000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m19s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m16s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m3s                   kube-proxy       
	  Normal  Starting                 4m22s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m22s (x3 over 4m22s)  kubelet          Node running-upgrade-520000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m22s (x3 over 4m22s)  kubelet          Node running-upgrade-520000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m22s (x3 over 4m22s)  kubelet          Node running-upgrade-520000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m22s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 4m17s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  4m17s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m17s                  kubelet          Node running-upgrade-520000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m17s                  kubelet          Node running-upgrade-520000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m17s                  kubelet          Node running-upgrade-520000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                4m17s                  kubelet          Node running-upgrade-520000 status is now: NodeReady
	  Normal  RegisteredNode           4m5s                   node-controller  Node running-upgrade-520000 event: Registered Node running-upgrade-520000 in Controller
	
	
	==> dmesg <==
	[  +1.599914] systemd-fstab-generator[875]: Ignoring "noauto" for root device
	[  +0.091342] systemd-fstab-generator[886]: Ignoring "noauto" for root device
	[  +0.078611] systemd-fstab-generator[897]: Ignoring "noauto" for root device
	[  +1.146883] kauditd_printk_skb: 53 callbacks suppressed
	[  +0.088460] systemd-fstab-generator[1047]: Ignoring "noauto" for root device
	[  +0.075770] systemd-fstab-generator[1058]: Ignoring "noauto" for root device
	[  +2.558764] systemd-fstab-generator[1289]: Ignoring "noauto" for root device
	[ +14.144789] systemd-fstab-generator[1951]: Ignoring "noauto" for root device
	[  +2.779876] systemd-fstab-generator[2227]: Ignoring "noauto" for root device
	[  +0.144088] systemd-fstab-generator[2258]: Ignoring "noauto" for root device
	[  +0.088625] systemd-fstab-generator[2269]: Ignoring "noauto" for root device
	[  +0.095141] systemd-fstab-generator[2282]: Ignoring "noauto" for root device
	[  +2.423902] kauditd_printk_skb: 47 callbacks suppressed
	[  +0.200988] systemd-fstab-generator[3052]: Ignoring "noauto" for root device
	[  +0.082555] systemd-fstab-generator[3065]: Ignoring "noauto" for root device
	[  +0.081899] systemd-fstab-generator[3076]: Ignoring "noauto" for root device
	[  +0.093492] systemd-fstab-generator[3090]: Ignoring "noauto" for root device
	[  +2.289535] systemd-fstab-generator[3244]: Ignoring "noauto" for root device
	[  +4.640380] systemd-fstab-generator[3646]: Ignoring "noauto" for root device
	[  +0.978789] systemd-fstab-generator[3772]: Ignoring "noauto" for root device
	[ +19.267224] kauditd_printk_skb: 68 callbacks suppressed
	[Sep30 11:08] kauditd_printk_skb: 21 callbacks suppressed
	[  +1.484425] systemd-fstab-generator[11963]: Ignoring "noauto" for root device
	[  +5.633478] systemd-fstab-generator[12564]: Ignoring "noauto" for root device
	[  +0.468812] systemd-fstab-generator[12695]: Ignoring "noauto" for root device
	
	
	==> etcd [c9f50b35a283] <==
	{"level":"info","ts":"2024-09-30T11:08:41.891Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"f074a195de705325","initial-advertise-peer-urls":["https://10.0.2.15:2380"],"listen-peer-urls":["https://10.0.2.15:2380"],"advertise-client-urls":["https://10.0.2.15:2379"],"listen-client-urls":["https://10.0.2.15:2379","https://127.0.0.1:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-30T11:08:41.891Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 switched to configuration voters=(17326651331455243045)"}
	{"level":"info","ts":"2024-09-30T11:08:41.891Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","added-peer-id":"f074a195de705325","added-peer-peer-urls":["https://10.0.2.15:2380"]}
	{"level":"info","ts":"2024-09-30T11:08:41.889Z","caller":"etcdserver/server.go:736","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"f074a195de705325","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	{"level":"info","ts":"2024-09-30T11:08:41.891Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-09-30T11:08:41.891Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-09-30T11:08:41.891Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-30T11:08:42.084Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 is starting a new election at term 1"}
	{"level":"info","ts":"2024-09-30T11:08:42.084Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-09-30T11:08:42.085Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgPreVoteResp from f074a195de705325 at term 1"}
	{"level":"info","ts":"2024-09-30T11:08:42.085Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became candidate at term 2"}
	{"level":"info","ts":"2024-09-30T11:08:42.085Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgVoteResp from f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-09-30T11:08:42.085Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became leader at term 2"}
	{"level":"info","ts":"2024-09-30T11:08:42.085Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f074a195de705325 elected leader f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-09-30T11:08:42.085Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-30T11:08:42.091Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-30T11:08:42.091Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-30T11:08:42.091Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-30T11:08:42.091Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"f074a195de705325","local-member-attributes":"{Name:running-upgrade-520000 ClientURLs:[https://10.0.2.15:2379]}","request-path":"/0/members/f074a195de705325/attributes","cluster-id":"ef296cf39f5d9d66","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-30T11:08:42.091Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-30T11:08:42.093Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-30T11:08:42.093Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-30T11:08:42.093Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"10.0.2.15:2379"}
	{"level":"info","ts":"2024-09-30T11:08:42.094Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-30T11:08:42.094Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 11:13:03 up 9 min,  0 users,  load average: 0.23, 0.42, 0.25
	Linux running-upgrade-520000 5.10.57 #1 SMP PREEMPT Thu Jun 16 21:01:29 UTC 2022 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [c3591c3891b2] <==
	I0930 11:08:43.732559       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0930 11:08:43.748223       1 cache.go:39] Caches are synced for autoregister controller
	I0930 11:08:43.748684       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0930 11:08:43.748789       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0930 11:08:43.749010       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0930 11:08:43.749038       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0930 11:08:43.749085       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0930 11:08:44.482484       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0930 11:08:44.656230       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0930 11:08:44.661770       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0930 11:08:44.661800       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0930 11:08:44.826249       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0930 11:08:44.838707       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0930 11:08:44.920424       1 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0930 11:08:44.922531       1 lease.go:234] Resetting endpoints for master service "kubernetes" to [10.0.2.15]
	I0930 11:08:44.922976       1 controller.go:611] quota admission added evaluator for: endpoints
	I0930 11:08:44.924366       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0930 11:08:45.806084       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0930 11:08:46.512276       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0930 11:08:46.515656       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0930 11:08:46.552890       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0930 11:08:46.563723       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0930 11:08:59.462524       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0930 11:08:59.562200       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0930 11:08:59.958612       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	
	
	==> kube-controller-manager [55b4e3fee39e] <==
	I0930 11:08:58.670268       1 shared_informer.go:262] Caches are synced for persistent volume
	I0930 11:08:58.672404       1 shared_informer.go:262] Caches are synced for PV protection
	I0930 11:08:58.676914       1 shared_informer.go:262] Caches are synced for expand
	I0930 11:08:58.692392       1 shared_informer.go:262] Caches are synced for daemon sets
	I0930 11:08:58.706571       1 shared_informer.go:262] Caches are synced for ephemeral
	I0930 11:08:58.706575       1 shared_informer.go:262] Caches are synced for ReplicaSet
	I0930 11:08:58.711754       1 shared_informer.go:262] Caches are synced for job
	I0930 11:08:58.729784       1 shared_informer.go:262] Caches are synced for certificate-csrapproving
	I0930 11:08:58.734337       1 shared_informer.go:262] Caches are synced for cronjob
	I0930 11:08:58.760670       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kubelet-client
	I0930 11:08:58.764897       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kubelet-serving
	I0930 11:08:58.776327       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-legacy-unknown
	I0930 11:08:58.776349       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0930 11:08:58.807152       1 shared_informer.go:262] Caches are synced for deployment
	I0930 11:08:58.812319       1 shared_informer.go:262] Caches are synced for disruption
	I0930 11:08:58.812325       1 disruption.go:371] Sending events to api server.
	I0930 11:08:58.862344       1 shared_informer.go:262] Caches are synced for resource quota
	I0930 11:08:58.883616       1 shared_informer.go:262] Caches are synced for resource quota
	I0930 11:08:59.286629       1 shared_informer.go:262] Caches are synced for garbage collector
	I0930 11:08:59.288769       1 shared_informer.go:262] Caches are synced for garbage collector
	I0930 11:08:59.288778       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0930 11:08:59.465439       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-6l7zr"
	I0930 11:08:59.563408       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-6d4b75cb6d to 2"
	I0930 11:08:59.667674       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-ph9mk"
	I0930 11:08:59.675022       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-d6868"
	
	
	==> kube-proxy [485edac7e4e9] <==
	I0930 11:08:59.945104       1 node.go:163] Successfully retrieved node IP: 10.0.2.15
	I0930 11:08:59.945129       1 server_others.go:138] "Detected node IP" address="10.0.2.15"
	I0930 11:08:59.945239       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0930 11:08:59.956763       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0930 11:08:59.956776       1 server_others.go:206] "Using iptables Proxier"
	I0930 11:08:59.956789       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0930 11:08:59.956930       1 server.go:661] "Version info" version="v1.24.1"
	I0930 11:08:59.956957       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0930 11:08:59.957218       1 config.go:317] "Starting service config controller"
	I0930 11:08:59.957228       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0930 11:08:59.957238       1 config.go:226] "Starting endpoint slice config controller"
	I0930 11:08:59.957261       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0930 11:08:59.957666       1 config.go:444] "Starting node config controller"
	I0930 11:08:59.957689       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0930 11:09:00.057732       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0930 11:09:00.057757       1 shared_informer.go:262] Caches are synced for service config
	I0930 11:09:00.057977       1 shared_informer.go:262] Caches are synced for node config
	
	
	==> kube-scheduler [7dc64314198d] <==
	W0930 11:08:43.712644       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0930 11:08:43.712960       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0930 11:08:43.712655       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0930 11:08:43.713003       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0930 11:08:43.712783       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0930 11:08:43.713035       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0930 11:08:43.712819       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0930 11:08:43.713148       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0930 11:08:43.712473       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0930 11:08:43.713189       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0930 11:08:43.712462       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0930 11:08:43.713237       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0930 11:08:44.529290       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0930 11:08:44.529380       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0930 11:08:44.598540       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0930 11:08:44.598588       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0930 11:08:44.680202       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0930 11:08:44.680230       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0930 11:08:44.706341       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0930 11:08:44.706461       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0930 11:08:44.742452       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0930 11:08:44.742576       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0930 11:08:44.742615       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0930 11:08:44.742636       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0930 11:08:45.102095       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Mon 2024-09-30 11:03:51 UTC, ends at Mon 2024-09-30 11:13:04 UTC. --
	Sep 30 11:08:47 running-upgrade-520000 kubelet[12570]: I0930 11:08:47.541182   12570 apiserver.go:52] "Watching apiserver"
	Sep 30 11:08:47 running-upgrade-520000 kubelet[12570]: I0930 11:08:47.769951   12570 reconciler.go:157] "Reconciler: start to sync state"
	Sep 30 11:08:48 running-upgrade-520000 kubelet[12570]: E0930 11:08:48.146443   12570 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-scheduler-running-upgrade-520000\" already exists" pod="kube-system/kube-scheduler-running-upgrade-520000"
	Sep 30 11:08:48 running-upgrade-520000 kubelet[12570]: E0930 11:08:48.344691   12570 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-running-upgrade-520000\" already exists" pod="kube-system/kube-controller-manager-running-upgrade-520000"
	Sep 30 11:08:48 running-upgrade-520000 kubelet[12570]: E0930 11:08:48.544661   12570 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"etcd-running-upgrade-520000\" already exists" pod="kube-system/etcd-running-upgrade-520000"
	Sep 30 11:08:48 running-upgrade-520000 kubelet[12570]: I0930 11:08:48.742449   12570 request.go:601] Waited for 1.122534013s due to client-side throttling, not priority and fairness, request: POST:https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods
	Sep 30 11:08:48 running-upgrade-520000 kubelet[12570]: E0930 11:08:48.746226   12570 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-apiserver-running-upgrade-520000\" already exists" pod="kube-system/kube-apiserver-running-upgrade-520000"
	Sep 30 11:08:58 running-upgrade-520000 kubelet[12570]: I0930 11:08:58.662461   12570 topology_manager.go:200] "Topology Admit Handler"
	Sep 30 11:08:58 running-upgrade-520000 kubelet[12570]: I0930 11:08:58.753228   12570 kuberuntime_manager.go:1095] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Sep 30 11:08:58 running-upgrade-520000 kubelet[12570]: I0930 11:08:58.753622   12570 kubelet_network.go:60] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Sep 30 11:08:58 running-upgrade-520000 kubelet[12570]: I0930 11:08:58.854179   12570 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/b1b510dd-6911-4172-ac46-fa97c0093582-tmp\") pod \"storage-provisioner\" (UID: \"b1b510dd-6911-4172-ac46-fa97c0093582\") " pod="kube-system/storage-provisioner"
	Sep 30 11:08:58 running-upgrade-520000 kubelet[12570]: I0930 11:08:58.854207   12570 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ncwx\" (UniqueName: \"kubernetes.io/projected/b1b510dd-6911-4172-ac46-fa97c0093582-kube-api-access-2ncwx\") pod \"storage-provisioner\" (UID: \"b1b510dd-6911-4172-ac46-fa97c0093582\") " pod="kube-system/storage-provisioner"
	Sep 30 11:08:59 running-upgrade-520000 kubelet[12570]: I0930 11:08:59.469844   12570 topology_manager.go:200] "Topology Admit Handler"
	Sep 30 11:08:59 running-upgrade-520000 kubelet[12570]: I0930 11:08:59.660371   12570 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mcrm6\" (UniqueName: \"kubernetes.io/projected/c37be933-4e09-4b7f-b0b2-dcb6a6dd44b8-kube-api-access-mcrm6\") pod \"kube-proxy-6l7zr\" (UID: \"c37be933-4e09-4b7f-b0b2-dcb6a6dd44b8\") " pod="kube-system/kube-proxy-6l7zr"
	Sep 30 11:08:59 running-upgrade-520000 kubelet[12570]: I0930 11:08:59.660397   12570 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c37be933-4e09-4b7f-b0b2-dcb6a6dd44b8-kube-proxy\") pod \"kube-proxy-6l7zr\" (UID: \"c37be933-4e09-4b7f-b0b2-dcb6a6dd44b8\") " pod="kube-system/kube-proxy-6l7zr"
	Sep 30 11:08:59 running-upgrade-520000 kubelet[12570]: I0930 11:08:59.660411   12570 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c37be933-4e09-4b7f-b0b2-dcb6a6dd44b8-xtables-lock\") pod \"kube-proxy-6l7zr\" (UID: \"c37be933-4e09-4b7f-b0b2-dcb6a6dd44b8\") " pod="kube-system/kube-proxy-6l7zr"
	Sep 30 11:08:59 running-upgrade-520000 kubelet[12570]: I0930 11:08:59.660423   12570 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c37be933-4e09-4b7f-b0b2-dcb6a6dd44b8-lib-modules\") pod \"kube-proxy-6l7zr\" (UID: \"c37be933-4e09-4b7f-b0b2-dcb6a6dd44b8\") " pod="kube-system/kube-proxy-6l7zr"
	Sep 30 11:08:59 running-upgrade-520000 kubelet[12570]: I0930 11:08:59.675279   12570 topology_manager.go:200] "Topology Admit Handler"
	Sep 30 11:08:59 running-upgrade-520000 kubelet[12570]: I0930 11:08:59.685467   12570 topology_manager.go:200] "Topology Admit Handler"
	Sep 30 11:08:59 running-upgrade-520000 kubelet[12570]: I0930 11:08:59.861217   12570 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6r8q\" (UniqueName: \"kubernetes.io/projected/c4369520-53fe-4818-ad8f-f89f792f1ff8-kube-api-access-k6r8q\") pod \"coredns-6d4b75cb6d-ph9mk\" (UID: \"c4369520-53fe-4818-ad8f-f89f792f1ff8\") " pod="kube-system/coredns-6d4b75cb6d-ph9mk"
	Sep 30 11:08:59 running-upgrade-520000 kubelet[12570]: I0930 11:08:59.861343   12570 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/06fbe9b0-4477-45d0-a63c-93f3c86e2314-config-volume\") pod \"coredns-6d4b75cb6d-d6868\" (UID: \"06fbe9b0-4477-45d0-a63c-93f3c86e2314\") " pod="kube-system/coredns-6d4b75cb6d-d6868"
	Sep 30 11:08:59 running-upgrade-520000 kubelet[12570]: I0930 11:08:59.861357   12570 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2qvzg\" (UniqueName: \"kubernetes.io/projected/06fbe9b0-4477-45d0-a63c-93f3c86e2314-kube-api-access-2qvzg\") pod \"coredns-6d4b75cb6d-d6868\" (UID: \"06fbe9b0-4477-45d0-a63c-93f3c86e2314\") " pod="kube-system/coredns-6d4b75cb6d-d6868"
	Sep 30 11:08:59 running-upgrade-520000 kubelet[12570]: I0930 11:08:59.861368   12570 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c4369520-53fe-4818-ad8f-f89f792f1ff8-config-volume\") pod \"coredns-6d4b75cb6d-ph9mk\" (UID: \"c4369520-53fe-4818-ad8f-f89f792f1ff8\") " pod="kube-system/coredns-6d4b75cb6d-ph9mk"
	Sep 30 11:12:48 running-upgrade-520000 kubelet[12570]: I0930 11:12:48.018365   12570 scope.go:110] "RemoveContainer" containerID="85a0a5385195c87daba33d21d398c0f9c09a729209d7e59e69da7c98b4197aad"
	Sep 30 11:12:48 running-upgrade-520000 kubelet[12570]: I0930 11:12:48.033373   12570 scope.go:110] "RemoveContainer" containerID="4b60eaea6a29343b72bedd7a332048aa228091e2ae451bbccbdfc0100fc5537e"
	
	
	==> storage-provisioner [918371f0f495] <==
	I0930 11:08:59.179594       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0930 11:08:59.185859       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0930 11:08:59.185879       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0930 11:08:59.188571       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0930 11:08:59.188722       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_running-upgrade-520000_e712aae7-97dc-45c2-8547-406698ac4d4e!
	I0930 11:08:59.188762       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"8ed49b40-5162-48af-a423-df9552e7a5cb", APIVersion:"v1", ResourceVersion:"337", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' running-upgrade-520000_e712aae7-97dc-45c2-8547-406698ac4d4e became leader
	I0930 11:08:59.288990       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_running-upgrade-520000_e712aae7-97dc-45c2-8547-406698ac4d4e!

-- /stdout --
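Note on the CoreDNS sections above: every HINFO probe to the upstream resolver 10.0.2.3:53 fails with a UDP read i/o timeout, i.e. nothing behind the user-mode network ever answers. A minimal standalone Go sketch (illustrative only, not part of the test suite; only the address is taken from the logs) of the same send-then-read-with-deadline pattern that produces these errors:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Upstream resolver address as reported in the CoreDNS errors above.
	conn, err := net.Dial("udp", "10.0.2.3:53")
	if err != nil {
		fmt.Println("dial:", err)
		return
	}
	defer conn.Close()

	// A real resolver sends a DNS-encoded HINFO query; any datagram is
	// enough to demonstrate the read deadline expiring when nothing answers.
	if _, err := conn.Write([]byte("probe")); err != nil {
		fmt.Println("write:", err)
		return
	}
	_ = conn.SetReadDeadline(time.Now().Add(2 * time.Second))
	buf := make([]byte, 512)
	if _, err := conn.Read(buf); err != nil {
		fmt.Println("read:", err) // prints an "i/o timeout" error, as in the logs
	}
}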
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-520000 -n running-upgrade-520000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-520000 -n running-upgrade-520000: exit status 2 (15.772741375s)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "running-upgrade-520000" apiserver is not running, skipping kubectl commands (state="Stopped")
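The status probe above passes a Go text/template via --format={{.APIServer}} so that minikube prints just that one field, which is why the captured stdout is the single word "Stopped". A small illustrative sketch of how such a template renders (the struct here is hypothetical; only the APIServer field name comes from the command line):

package main

import (
	"os"
	"text/template"
)

// status is a stand-in struct, not minikube's real status type.
type status struct {
	Host      string
	APIServer string
}

func main() {
	tmpl := template.Must(template.New("status").Parse("{{.APIServer}}\n"))
	// Renders only the selected field; the probe above printed "Stopped".
	_ = tmpl.Execute(os.Stdout, status{Host: "Running", APIServer: "Stopped"})
}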
helpers_test.go:175: Cleaning up "running-upgrade-520000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p running-upgrade-520000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-arm64 delete -p running-upgrade-520000: (1.324475209s)
--- FAIL: TestRunningBinaryUpgrade (622.66s)

TestKubernetesUpgrade (18.92s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-925000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-925000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (10.044945375s)

-- stdout --
	* [kubernetes-upgrade-925000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19734
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19734-1406/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19734-1406/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubernetes-upgrade-925000" primary control-plane node in "kubernetes-upgrade-925000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubernetes-upgrade-925000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
-- /stdout --
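Both VM creation attempts abort with a refused connection on the socket_vmnet unix socket, so the qemu2 driver never gets a network backend. A minimal Go sketch (a hypothetical check, not from the minikube codebase) of the same reachability test against the path shown in the error:

package main

import (
	"fmt"
	"net"
)

func main() {
	// Path taken from the error message above; dialing fails with
	// "connect: connection refused" when no socket_vmnet daemon is listening.
	conn, err := net.Dial("unix", "/var/run/socket_vmnet")
	if err != nil {
		fmt.Println("socket_vmnet unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("socket_vmnet reachable")
}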
** stderr ** 
	I0930 04:05:58.773218    4992 out.go:345] Setting OutFile to fd 1 ...
	I0930 04:05:58.773364    4992 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 04:05:58.773367    4992 out.go:358] Setting ErrFile to fd 2...
	I0930 04:05:58.773370    4992 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 04:05:58.773513    4992 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19734-1406/.minikube/bin
	I0930 04:05:58.774648    4992 out.go:352] Setting JSON to false
	I0930 04:05:58.791313    4992 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3921,"bootTime":1727690437,"procs":508,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0930 04:05:58.791391    4992 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0930 04:05:58.795276    4992 out.go:177] * [kubernetes-upgrade-925000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0930 04:05:58.802979    4992 out.go:177]   - MINIKUBE_LOCATION=19734
	I0930 04:05:58.803006    4992 notify.go:220] Checking for updates...
	I0930 04:05:58.809921    4992 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19734-1406/kubeconfig
	I0930 04:05:58.812961    4992 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0930 04:05:58.816000    4992 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0930 04:05:58.818967    4992 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19734-1406/.minikube
	I0930 04:05:58.821943    4992 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0930 04:05:58.825366    4992 config.go:182] Loaded profile config "multinode-711000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0930 04:05:58.825435    4992 config.go:182] Loaded profile config "running-upgrade-520000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0930 04:05:58.825484    4992 driver.go:394] Setting default libvirt URI to qemu:///system
	I0930 04:05:58.829965    4992 out.go:177] * Using the qemu2 driver based on user configuration
	I0930 04:05:58.835212    4992 start.go:297] selected driver: qemu2
	I0930 04:05:58.835217    4992 start.go:901] validating driver "qemu2" against <nil>
	I0930 04:05:58.835227    4992 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0930 04:05:58.837450    4992 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0930 04:05:58.841007    4992 out.go:177] * Automatically selected the socket_vmnet network
	I0930 04:05:58.843982    4992 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0930 04:05:58.843995    4992 cni.go:84] Creating CNI manager for ""
	I0930 04:05:58.844012    4992 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0930 04:05:58.844041    4992 start.go:340] cluster config:
	{Name:kubernetes-upgrade-925000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-925000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 04:05:58.847406    4992 iso.go:125] acquiring lock: {Name:mk0aae8f7f133a44b195b0eefe4c7ae573d14aac Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0930 04:05:58.855013    4992 out.go:177] * Starting "kubernetes-upgrade-925000" primary control-plane node in "kubernetes-upgrade-925000" cluster
	I0930 04:05:58.858925    4992 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0930 04:05:58.858938    4992 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0930 04:05:58.858947    4992 cache.go:56] Caching tarball of preloaded images
	I0930 04:05:58.858998    4992 preload.go:172] Found /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0930 04:05:58.859003    4992 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0930 04:05:58.859065    4992 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19734-1406/.minikube/profiles/kubernetes-upgrade-925000/config.json ...
	I0930 04:05:58.859075    4992 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19734-1406/.minikube/profiles/kubernetes-upgrade-925000/config.json: {Name:mkc7a926829f051d94e8a96f8573cd2e5dc3a7bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 04:05:58.859462    4992 start.go:360] acquireMachinesLock for kubernetes-upgrade-925000: {Name:mk6ecfc7d8126da9fa2597e515ffeee9de676ee7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0930 04:05:58.859495    4992 start.go:364] duration metric: took 26.209µs to acquireMachinesLock for "kubernetes-upgrade-925000"
	I0930 04:05:58.859506    4992 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-925000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-925000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0930 04:05:58.859536    4992 start.go:125] createHost starting for "" (driver="qemu2")
	I0930 04:05:58.866936    4992 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0930 04:05:58.883425    4992 start.go:159] libmachine.API.Create for "kubernetes-upgrade-925000" (driver="qemu2")
	I0930 04:05:58.883452    4992 client.go:168] LocalClient.Create starting
	I0930 04:05:58.883522    4992 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19734-1406/.minikube/certs/ca.pem
	I0930 04:05:58.883556    4992 main.go:141] libmachine: Decoding PEM data...
	I0930 04:05:58.883565    4992 main.go:141] libmachine: Parsing certificate...
	I0930 04:05:58.883612    4992 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19734-1406/.minikube/certs/cert.pem
	I0930 04:05:58.883635    4992 main.go:141] libmachine: Decoding PEM data...
	I0930 04:05:58.883646    4992 main.go:141] libmachine: Parsing certificate...
	I0930 04:05:58.884120    4992 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19734-1406/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0930 04:05:59.056762    4992 main.go:141] libmachine: Creating SSH key...
	I0930 04:05:59.205156    4992 main.go:141] libmachine: Creating Disk image...
	I0930 04:05:59.205169    4992 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0930 04:05:59.205423    4992 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/kubernetes-upgrade-925000/disk.qcow2.raw /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/kubernetes-upgrade-925000/disk.qcow2
	I0930 04:05:59.215296    4992 main.go:141] libmachine: STDOUT: 
	I0930 04:05:59.215318    4992 main.go:141] libmachine: STDERR: 
	I0930 04:05:59.215387    4992 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/kubernetes-upgrade-925000/disk.qcow2 +20000M
	I0930 04:05:59.223513    4992 main.go:141] libmachine: STDOUT: Image resized.
	
	I0930 04:05:59.223529    4992 main.go:141] libmachine: STDERR: 
	I0930 04:05:59.223549    4992 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/kubernetes-upgrade-925000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/kubernetes-upgrade-925000/disk.qcow2
	I0930 04:05:59.223554    4992 main.go:141] libmachine: Starting QEMU VM...
	I0930 04:05:59.223567    4992 qemu.go:418] Using hvf for hardware acceleration
	I0930 04:05:59.223598    4992 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/kubernetes-upgrade-925000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19734-1406/.minikube/machines/kubernetes-upgrade-925000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/kubernetes-upgrade-925000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0a:d4:26:28:74:0b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/kubernetes-upgrade-925000/disk.qcow2
	I0930 04:05:59.225346    4992 main.go:141] libmachine: STDOUT: 
	I0930 04:05:59.225362    4992 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0930 04:05:59.225384    4992 client.go:171] duration metric: took 341.931042ms to LocalClient.Create
	I0930 04:06:01.227477    4992 start.go:128] duration metric: took 2.367965834s to createHost
	I0930 04:06:01.227498    4992 start.go:83] releasing machines lock for "kubernetes-upgrade-925000", held for 2.368032584s
	W0930 04:06:01.227515    4992 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0930 04:06:01.238449    4992 out.go:177] * Deleting "kubernetes-upgrade-925000" in qemu2 ...
	W0930 04:06:01.250051    4992 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0930 04:06:01.250057    4992 start.go:729] Will try again in 5 seconds ...
	I0930 04:06:06.252177    4992 start.go:360] acquireMachinesLock for kubernetes-upgrade-925000: {Name:mk6ecfc7d8126da9fa2597e515ffeee9de676ee7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0930 04:06:06.252783    4992 start.go:364] duration metric: took 500.375µs to acquireMachinesLock for "kubernetes-upgrade-925000"
	I0930 04:06:06.252943    4992 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-925000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-925000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0930 04:06:06.253255    4992 start.go:125] createHost starting for "" (driver="qemu2")
	I0930 04:06:06.264873    4992 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0930 04:06:06.316995    4992 start.go:159] libmachine.API.Create for "kubernetes-upgrade-925000" (driver="qemu2")
	I0930 04:06:06.317083    4992 client.go:168] LocalClient.Create starting
	I0930 04:06:06.317239    4992 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19734-1406/.minikube/certs/ca.pem
	I0930 04:06:06.317312    4992 main.go:141] libmachine: Decoding PEM data...
	I0930 04:06:06.317328    4992 main.go:141] libmachine: Parsing certificate...
	I0930 04:06:06.317389    4992 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19734-1406/.minikube/certs/cert.pem
	I0930 04:06:06.317434    4992 main.go:141] libmachine: Decoding PEM data...
	I0930 04:06:06.317451    4992 main.go:141] libmachine: Parsing certificate...
	I0930 04:06:06.317982    4992 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19734-1406/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0930 04:06:06.492446    4992 main.go:141] libmachine: Creating SSH key...
	I0930 04:06:06.730320    4992 main.go:141] libmachine: Creating Disk image...
	I0930 04:06:06.730331    4992 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0930 04:06:06.730611    4992 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/kubernetes-upgrade-925000/disk.qcow2.raw /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/kubernetes-upgrade-925000/disk.qcow2
	I0930 04:06:06.740743    4992 main.go:141] libmachine: STDOUT: 
	I0930 04:06:06.740766    4992 main.go:141] libmachine: STDERR: 
	I0930 04:06:06.740831    4992 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/kubernetes-upgrade-925000/disk.qcow2 +20000M
	I0930 04:06:06.749307    4992 main.go:141] libmachine: STDOUT: Image resized.
	
	I0930 04:06:06.749327    4992 main.go:141] libmachine: STDERR: 
	I0930 04:06:06.749341    4992 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/kubernetes-upgrade-925000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/kubernetes-upgrade-925000/disk.qcow2
	I0930 04:06:06.749346    4992 main.go:141] libmachine: Starting QEMU VM...
	I0930 04:06:06.749360    4992 qemu.go:418] Using hvf for hardware acceleration
	I0930 04:06:06.749402    4992 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/kubernetes-upgrade-925000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19734-1406/.minikube/machines/kubernetes-upgrade-925000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/kubernetes-upgrade-925000/qemu.pid -device virtio-net-pci,netdev=net0,mac=32:b8:58:e5:af:2a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/kubernetes-upgrade-925000/disk.qcow2
	I0930 04:06:06.751223    4992 main.go:141] libmachine: STDOUT: 
	I0930 04:06:06.751246    4992 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0930 04:06:06.751260    4992 client.go:171] duration metric: took 434.166792ms to LocalClient.Create
	I0930 04:06:08.753412    4992 start.go:128] duration metric: took 2.500158666s to createHost
	I0930 04:06:08.753484    4992 start.go:83] releasing machines lock for "kubernetes-upgrade-925000", held for 2.500707375s
	W0930 04:06:08.753781    4992 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-925000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-925000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0930 04:06:08.763165    4992 out.go:201] 
	W0930 04:06:08.768482    4992 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0930 04:06:08.768528    4992 out.go:270] * 
	* 
	W0930 04:06:08.769824    4992 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0930 04:06:08.780308    4992 out.go:201] 

** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-darwin-arm64 start -p kubernetes-upgrade-925000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
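
Every failed attempt in this test dies at the same point: socket_vmnet_client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet ("Connection refused"), so QEMU never receives the network file descriptor it is launched with. A minimal Go probe of that precondition, assuming only the socket path shown in the log (the probe is illustrative and not part of the test suite):

	// probe_socket_vmnet.go: dial the socket_vmnet control socket the same
	// way socket_vmnet_client does, to distinguish "daemon not running"
	// from any other QEMU startup failure.
	package main

	import (
		"fmt"
		"net"
	)

	func main() {
		conn, err := net.Dial("unix", "/var/run/socket_vmnet")
		if err != nil {
			fmt.Println("socket_vmnet unreachable:", err) // matches the error above
			return
		}
		defer conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

If this probe fails the way the log does, retrying the VM start cannot succeed until the daemon is restarted.
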
version_upgrade_test.go:227: (dbg) Run:  out/minikube-darwin-arm64 stop -p kubernetes-upgrade-925000
version_upgrade_test.go:227: (dbg) Done: out/minikube-darwin-arm64 stop -p kubernetes-upgrade-925000: (3.453342084s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-darwin-arm64 -p kubernetes-upgrade-925000 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p kubernetes-upgrade-925000 status --format={{.Host}}: exit status 7 (62.005042ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-925000 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-925000 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (5.181213583s)

-- stdout --
	* [kubernetes-upgrade-925000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19734
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19734-1406/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19734-1406/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "kubernetes-upgrade-925000" primary control-plane node in "kubernetes-upgrade-925000" cluster
	* Restarting existing qemu2 VM for "kubernetes-upgrade-925000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "kubernetes-upgrade-925000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0930 04:06:12.337445    5030 out.go:345] Setting OutFile to fd 1 ...
	I0930 04:06:12.337591    5030 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 04:06:12.337595    5030 out.go:358] Setting ErrFile to fd 2...
	I0930 04:06:12.337597    5030 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 04:06:12.337734    5030 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19734-1406/.minikube/bin
	I0930 04:06:12.338697    5030 out.go:352] Setting JSON to false
	I0930 04:06:12.355023    5030 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3935,"bootTime":1727690437,"procs":510,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0930 04:06:12.355092    5030 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0930 04:06:12.360422    5030 out.go:177] * [kubernetes-upgrade-925000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0930 04:06:12.363325    5030 out.go:177]   - MINIKUBE_LOCATION=19734
	I0930 04:06:12.363399    5030 notify.go:220] Checking for updates...
	I0930 04:06:12.370289    5030 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19734-1406/kubeconfig
	I0930 04:06:12.373266    5030 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0930 04:06:12.376315    5030 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0930 04:06:12.377873    5030 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19734-1406/.minikube
	I0930 04:06:12.381295    5030 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0930 04:06:12.384625    5030 config.go:182] Loaded profile config "kubernetes-upgrade-925000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0930 04:06:12.384881    5030 driver.go:394] Setting default libvirt URI to qemu:///system
	I0930 04:06:12.389162    5030 out.go:177] * Using the qemu2 driver based on existing profile
	I0930 04:06:12.396322    5030 start.go:297] selected driver: qemu2
	I0930 04:06:12.396327    5030 start.go:901] validating driver "qemu2" against &{Name:kubernetes-upgrade-925000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-925000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 04:06:12.396378    5030 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0930 04:06:12.398656    5030 cni.go:84] Creating CNI manager for ""
	I0930 04:06:12.398690    5030 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0930 04:06:12.398720    5030 start.go:340] cluster config:
	{Name:kubernetes-upgrade-925000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kubernetes-upgrade-925000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 04:06:12.402276    5030 iso.go:125] acquiring lock: {Name:mk0aae8f7f133a44b195b0eefe4c7ae573d14aac Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0930 04:06:12.409370    5030 out.go:177] * Starting "kubernetes-upgrade-925000" primary control-plane node in "kubernetes-upgrade-925000" cluster
	I0930 04:06:12.413303    5030 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0930 04:06:12.413319    5030 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0930 04:06:12.413330    5030 cache.go:56] Caching tarball of preloaded images
	I0930 04:06:12.413391    5030 preload.go:172] Found /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0930 04:06:12.413396    5030 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0930 04:06:12.413445    5030 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19734-1406/.minikube/profiles/kubernetes-upgrade-925000/config.json ...
	I0930 04:06:12.413995    5030 start.go:360] acquireMachinesLock for kubernetes-upgrade-925000: {Name:mk6ecfc7d8126da9fa2597e515ffeee9de676ee7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0930 04:06:12.414024    5030 start.go:364] duration metric: took 22.875µs to acquireMachinesLock for "kubernetes-upgrade-925000"
	I0930 04:06:12.414032    5030 start.go:96] Skipping create...Using existing machine configuration
	I0930 04:06:12.414036    5030 fix.go:54] fixHost starting: 
	I0930 04:06:12.414150    5030 fix.go:112] recreateIfNeeded on kubernetes-upgrade-925000: state=Stopped err=<nil>
	W0930 04:06:12.414158    5030 fix.go:138] unexpected machine state, will restart: <nil>
	I0930 04:06:12.422346    5030 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-925000" ...
	I0930 04:06:12.426282    5030 qemu.go:418] Using hvf for hardware acceleration
	I0930 04:06:12.426324    5030 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/kubernetes-upgrade-925000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19734-1406/.minikube/machines/kubernetes-upgrade-925000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/kubernetes-upgrade-925000/qemu.pid -device virtio-net-pci,netdev=net0,mac=32:b8:58:e5:af:2a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/kubernetes-upgrade-925000/disk.qcow2
	I0930 04:06:12.428336    5030 main.go:141] libmachine: STDOUT: 
	I0930 04:06:12.428354    5030 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0930 04:06:12.428390    5030 fix.go:56] duration metric: took 14.353ms for fixHost
	I0930 04:06:12.428395    5030 start.go:83] releasing machines lock for "kubernetes-upgrade-925000", held for 14.367292ms
	W0930 04:06:12.428401    5030 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0930 04:06:12.428428    5030 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0930 04:06:12.428433    5030 start.go:729] Will try again in 5 seconds ...
	I0930 04:06:17.430649    5030 start.go:360] acquireMachinesLock for kubernetes-upgrade-925000: {Name:mk6ecfc7d8126da9fa2597e515ffeee9de676ee7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0930 04:06:17.431145    5030 start.go:364] duration metric: took 368.208µs to acquireMachinesLock for "kubernetes-upgrade-925000"
	I0930 04:06:17.431236    5030 start.go:96] Skipping create...Using existing machine configuration
	I0930 04:06:17.431256    5030 fix.go:54] fixHost starting: 
	I0930 04:06:17.431999    5030 fix.go:112] recreateIfNeeded on kubernetes-upgrade-925000: state=Stopped err=<nil>
	W0930 04:06:17.432025    5030 fix.go:138] unexpected machine state, will restart: <nil>
	I0930 04:06:17.436681    5030 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-925000" ...
	I0930 04:06:17.443449    5030 qemu.go:418] Using hvf for hardware acceleration
	I0930 04:06:17.443669    5030 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/kubernetes-upgrade-925000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19734-1406/.minikube/machines/kubernetes-upgrade-925000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/kubernetes-upgrade-925000/qemu.pid -device virtio-net-pci,netdev=net0,mac=32:b8:58:e5:af:2a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/kubernetes-upgrade-925000/disk.qcow2
	I0930 04:06:17.453775    5030 main.go:141] libmachine: STDOUT: 
	I0930 04:06:17.453864    5030 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0930 04:06:17.453948    5030 fix.go:56] duration metric: took 22.693833ms for fixHost
	I0930 04:06:17.453971    5030 start.go:83] releasing machines lock for "kubernetes-upgrade-925000", held for 22.799917ms
	W0930 04:06:17.454268    5030 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-925000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-925000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0930 04:06:17.461368    5030 out.go:201] 
	W0930 04:06:17.464549    5030 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0930 04:06:17.464574    5030 out.go:270] * 
	* 
	W0930 04:06:17.467338    5030 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0930 04:06:17.474458    5030 out.go:201] 

                                                
                                                
** /stderr **
version_upgrade_test.go:245: failed to upgrade with newest k8s version. args: out/minikube-darwin-arm64 start -p kubernetes-upgrade-925000 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=qemu2  : exit status 80
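
The trace shows the driver's fixed recovery pattern: one failed StartHost produces a warning, the machines lock is released, and after a 5-second pause exactly one more attempt is made before exiting with GUEST_PROVISION. A condensed sketch of that control flow, using illustrative names rather than minikube's actual ones:

	// retry_sketch.go: one warning, one fixed delay, one retry, then give up.
	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// startHost stands in for the driver start that fails throughout the log.
	func startHost() error {
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}

	func main() {
		err := startHost()
		if err == nil {
			return
		}
		fmt.Println("! StartHost failed, but will try again:", err)
		time.Sleep(5 * time.Second)
		if err := startHost(); err != nil {
			fmt.Println("X Exiting due to GUEST_PROVISION:", err)
		}
	}

Because the underlying socket is still down, the second attempt fails identically, which is exactly the sequence captured above.
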
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-925000 version --output=json
version_upgrade_test.go:248: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-925000 version --output=json: exit status 1 (64.880625ms)

** stderr ** 
	error: context "kubernetes-upgrade-925000" does not exist

** /stderr **
version_upgrade_test.go:250: error running kubectl: exit status 1
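
The kubectl failure follows directly from the earlier exits: the cluster was never provisioned, so no "kubernetes-upgrade-925000" context was ever written to the kubeconfig. A short check of that condition using k8s.io/client-go (the kubeconfig path and context name are taken from the log; the program itself is hypothetical and requires the client-go module):

	// context_check.go: confirm whether the kubeconfig contains the context
	// that kubectl complained about.
	package main

	import (
		"fmt"

		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.LoadFromFile("/Users/jenkins/minikube-integration/19734-1406/kubeconfig")
		if err != nil {
			fmt.Println("cannot read kubeconfig:", err)
			return
		}
		if _, ok := cfg.Contexts["kubernetes-upgrade-925000"]; !ok {
			fmt.Println(`error: context "kubernetes-upgrade-925000" does not exist`)
		}
	}
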
panic.go:629: *** TestKubernetesUpgrade FAILED at 2024-09-30 04:06:17.55455 -0700 PDT m=+2773.271045168
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-925000 -n kubernetes-upgrade-925000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-925000 -n kubernetes-upgrade-925000: exit status 7 (33.690625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "kubernetes-upgrade-925000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-925000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p kubernetes-upgrade-925000
--- FAIL: TestKubernetesUpgrade (18.92s)
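
Note that the disk preparation before each failed start succeeds: qemu-img convert from the raw image to qcow2, then qemu-img resize by +20000M. Those steps can be replayed outside the test harness; a minimal sketch driving the same commands (paths shortened to placeholders, qemu-img assumed on PATH):

	// disk_prep.go: replay the qemu-img steps that pass in the log above.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func run(name string, args ...string) {
		out, err := exec.Command(name, args...).CombinedOutput()
		fmt.Printf("executing: %s %v\n%s(err=%v)\n", name, args, out, err)
	}

	func main() {
		run("qemu-img", "convert", "-f", "raw", "-O", "qcow2", "disk.qcow2.raw", "disk.qcow2")
		run("qemu-img", "resize", "disk.qcow2", "+20000M")
	}

That the disk steps pass while the network step fails narrows the fault to the socket_vmnet daemon rather than the QEMU installation.
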

TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.49s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.34.0 on darwin (arm64)
- MINIKUBE_LOCATION=19734
- KUBECONFIG=/Users/jenkins/minikube-integration/19734-1406/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current3582255426/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.49s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.22s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.34.0 on darwin (arm64)
- MINIKUBE_LOCATION=19734
- KUBECONFIG=/Users/jenkins/minikube-integration/19734-1406/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current1195298436/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.22s)
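
Both TestHyperkitDriverSkipUpgrade subtests fail for the same structural reason: the hyperkit driver exists only for darwin/amd64, and this agent is darwin/arm64, so minikube rejects the driver (DRV_UNSUPPORTED_OS, exit status 56) before any upgrade logic runs. The gate reduces to a platform check like the following sketch (illustrative, not minikube's actual code):

	// platform_gate.go: the darwin/arm64 condition behind both hyperkit
	// subtest failures on this agent.
	package main

	import (
		"fmt"
		"runtime"
	)

	func main() {
		if runtime.GOOS != "darwin" || runtime.GOARCH != "amd64" {
			fmt.Printf("X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on %s/%s\n",
				runtime.GOOS, runtime.GOARCH)
		}
	}

On arm64 agents these failures are arguably a test-selection issue rather than a product regression.
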

TestStoppedBinaryUpgrade/Upgrade (581.02s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.2378131278 start -p stopped-upgrade-312000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:183: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.2378131278 start -p stopped-upgrade-312000 --memory=2200 --vm-driver=qemu2 : (45.8490465s)
version_upgrade_test.go:192: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.2378131278 -p stopped-upgrade-312000 stop
version_upgrade_test.go:192: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.2378131278 -p stopped-upgrade-312000 stop: (12.119145375s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-darwin-arm64 start -p stopped-upgrade-312000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
E0930 04:10:18.238630    1929 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19734-1406/.minikube/profiles/addons-584000/client.crt: no such file or directory" logger="UnhandledError"
E0930 04:10:32.389096    1929 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19734-1406/.minikube/profiles/functional-853000/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:198: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p stopped-upgrade-312000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m42.93823025s)

-- stdout --
	* [stopped-upgrade-312000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19734
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19734-1406/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19734-1406/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	* Using the qemu2 driver based on existing profile
	* Starting "stopped-upgrade-312000" primary control-plane node in "stopped-upgrade-312000" cluster
	* Restarting existing qemu2 VM for "stopped-upgrade-312000" ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

-- /stdout --
** stderr ** 
	I0930 04:07:20.696322    5073 out.go:345] Setting OutFile to fd 1 ...
	I0930 04:07:20.696490    5073 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 04:07:20.696494    5073 out.go:358] Setting ErrFile to fd 2...
	I0930 04:07:20.696497    5073 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 04:07:20.696668    5073 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19734-1406/.minikube/bin
	I0930 04:07:20.697972    5073 out.go:352] Setting JSON to false
	I0930 04:07:20.717383    5073 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4003,"bootTime":1727690437,"procs":508,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0930 04:07:20.717479    5073 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0930 04:07:20.722442    5073 out.go:177] * [stopped-upgrade-312000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0930 04:07:20.728355    5073 out.go:177]   - MINIKUBE_LOCATION=19734
	I0930 04:07:20.728460    5073 notify.go:220] Checking for updates...
	I0930 04:07:20.736293    5073 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19734-1406/kubeconfig
	I0930 04:07:20.739336    5073 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0930 04:07:20.742342    5073 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0930 04:07:20.745411    5073 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19734-1406/.minikube
	I0930 04:07:20.748337    5073 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0930 04:07:20.751603    5073 config.go:182] Loaded profile config "stopped-upgrade-312000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0930 04:07:20.755289    5073 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I0930 04:07:20.758291    5073 driver.go:394] Setting default libvirt URI to qemu:///system
	I0930 04:07:20.762344    5073 out.go:177] * Using the qemu2 driver based on existing profile
	I0930 04:07:20.769341    5073 start.go:297] selected driver: qemu2
	I0930 04:07:20.769347    5073 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-312000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50491 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-312000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0930 04:07:20.769415    5073 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0930 04:07:20.772211    5073 cni.go:84] Creating CNI manager for ""
	I0930 04:07:20.772244    5073 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0930 04:07:20.772270    5073 start.go:340] cluster config:
	{Name:stopped-upgrade-312000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50491 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-312000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0930 04:07:20.772326    5073 iso.go:125] acquiring lock: {Name:mk0aae8f7f133a44b195b0eefe4c7ae573d14aac Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0930 04:07:20.783828    5073 out.go:177] * Starting "stopped-upgrade-312000" primary control-plane node in "stopped-upgrade-312000" cluster
	I0930 04:07:20.788356    5073 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0930 04:07:20.788372    5073 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0930 04:07:20.788384    5073 cache.go:56] Caching tarball of preloaded images
	I0930 04:07:20.788455    5073 preload.go:172] Found /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0930 04:07:20.788462    5073 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0930 04:07:20.788521    5073 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19734-1406/.minikube/profiles/stopped-upgrade-312000/config.json ...
	I0930 04:07:20.789031    5073 start.go:360] acquireMachinesLock for stopped-upgrade-312000: {Name:mk6ecfc7d8126da9fa2597e515ffeee9de676ee7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0930 04:07:20.789072    5073 start.go:364] duration metric: took 33.708µs to acquireMachinesLock for "stopped-upgrade-312000"
	I0930 04:07:20.789081    5073 start.go:96] Skipping create...Using existing machine configuration
	I0930 04:07:20.789087    5073 fix.go:54] fixHost starting: 
	I0930 04:07:20.789208    5073 fix.go:112] recreateIfNeeded on stopped-upgrade-312000: state=Stopped err=<nil>
	W0930 04:07:20.789221    5073 fix.go:138] unexpected machine state, will restart: <nil>
	I0930 04:07:20.797314    5073 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-312000" ...
	I0930 04:07:20.801289    5073 qemu.go:418] Using hvf for hardware acceleration
	I0930 04:07:20.801373    5073 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/9.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/stopped-upgrade-312000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19734-1406/.minikube/machines/stopped-upgrade-312000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/stopped-upgrade-312000/qemu.pid -nic user,model=virtio,hostfwd=tcp::50456-:22,hostfwd=tcp::50457-:2376,hostname=stopped-upgrade-312000 -daemonize /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/stopped-upgrade-312000/disk.qcow2
	I0930 04:07:20.847913    5073 main.go:141] libmachine: STDOUT: 
	I0930 04:07:20.847942    5073 main.go:141] libmachine: STDERR: 
	I0930 04:07:20.847950    5073 main.go:141] libmachine: Waiting for VM to start (ssh -p 50456 docker@127.0.0.1)...
	I0930 04:07:41.096231    5073 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19734-1406/.minikube/profiles/stopped-upgrade-312000/config.json ...
	I0930 04:07:41.097219    5073 machine.go:93] provisionDockerMachine start ...
	I0930 04:07:41.097402    5073 main.go:141] libmachine: Using SSH client type: native
	I0930 04:07:41.097844    5073 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x105055c00] 0x105058440 <nil>  [] 0s} localhost 50456 <nil> <nil>}
	I0930 04:07:41.097858    5073 main.go:141] libmachine: About to run SSH command:
	hostname
	I0930 04:07:41.197917    5073 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0930 04:07:41.197954    5073 buildroot.go:166] provisioning hostname "stopped-upgrade-312000"
	I0930 04:07:41.198103    5073 main.go:141] libmachine: Using SSH client type: native
	I0930 04:07:41.198355    5073 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x105055c00] 0x105058440 <nil>  [] 0s} localhost 50456 <nil> <nil>}
	I0930 04:07:41.198367    5073 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-312000 && echo "stopped-upgrade-312000" | sudo tee /etc/hostname
	I0930 04:07:41.286376    5073 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-312000
	
	I0930 04:07:41.286463    5073 main.go:141] libmachine: Using SSH client type: native
	I0930 04:07:41.286627    5073 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x105055c00] 0x105058440 <nil>  [] 0s} localhost 50456 <nil> <nil>}
	I0930 04:07:41.286640    5073 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-312000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-312000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-312000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0930 04:07:41.368395    5073 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0930 04:07:41.368411    5073 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19734-1406/.minikube CaCertPath:/Users/jenkins/minikube-integration/19734-1406/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19734-1406/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19734-1406/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19734-1406/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19734-1406/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19734-1406/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19734-1406/.minikube}
	I0930 04:07:41.368420    5073 buildroot.go:174] setting up certificates
	I0930 04:07:41.368425    5073 provision.go:84] configureAuth start
	I0930 04:07:41.368436    5073 provision.go:143] copyHostCerts
	I0930 04:07:41.368529    5073 exec_runner.go:144] found /Users/jenkins/minikube-integration/19734-1406/.minikube/ca.pem, removing ...
	I0930 04:07:41.368540    5073 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19734-1406/.minikube/ca.pem
	I0930 04:07:41.368698    5073 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19734-1406/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19734-1406/.minikube/ca.pem (1078 bytes)
	I0930 04:07:41.368927    5073 exec_runner.go:144] found /Users/jenkins/minikube-integration/19734-1406/.minikube/cert.pem, removing ...
	I0930 04:07:41.368931    5073 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19734-1406/.minikube/cert.pem
	I0930 04:07:41.368995    5073 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19734-1406/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19734-1406/.minikube/cert.pem (1123 bytes)
	I0930 04:07:41.369137    5073 exec_runner.go:144] found /Users/jenkins/minikube-integration/19734-1406/.minikube/key.pem, removing ...
	I0930 04:07:41.369142    5073 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19734-1406/.minikube/key.pem
	I0930 04:07:41.369200    5073 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19734-1406/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19734-1406/.minikube/key.pem (1675 bytes)
	I0930 04:07:41.369304    5073 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19734-1406/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19734-1406/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-312000 san=[127.0.0.1 localhost minikube stopped-upgrade-312000]
	I0930 04:07:41.486998    5073 provision.go:177] copyRemoteCerts
	I0930 04:07:41.487051    5073 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0930 04:07:41.487064    5073 sshutil.go:53] new ssh client: &{IP:localhost Port:50456 SSHKeyPath:/Users/jenkins/minikube-integration/19734-1406/.minikube/machines/stopped-upgrade-312000/id_rsa Username:docker}
	I0930 04:07:41.526214    5073 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19734-1406/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0930 04:07:41.533608    5073 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0930 04:07:41.541189    5073 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0930 04:07:41.548431    5073 provision.go:87] duration metric: took 179.993458ms to configureAuth
	I0930 04:07:41.548440    5073 buildroot.go:189] setting minikube options for container-runtime
	I0930 04:07:41.548548    5073 config.go:182] Loaded profile config "stopped-upgrade-312000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0930 04:07:41.548587    5073 main.go:141] libmachine: Using SSH client type: native
	I0930 04:07:41.548682    5073 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x105055c00] 0x105058440 <nil>  [] 0s} localhost 50456 <nil> <nil>}
	I0930 04:07:41.548687    5073 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0930 04:07:41.620736    5073 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0930 04:07:41.620747    5073 buildroot.go:70] root file system type: tmpfs
	I0930 04:07:41.620803    5073 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0930 04:07:41.620864    5073 main.go:141] libmachine: Using SSH client type: native
	I0930 04:07:41.620973    5073 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x105055c00] 0x105058440 <nil>  [] 0s} localhost 50456 <nil> <nil>}
	I0930 04:07:41.621006    5073 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0930 04:07:41.696187    5073 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0930 04:07:41.696247    5073 main.go:141] libmachine: Using SSH client type: native
	I0930 04:07:41.696354    5073 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x105055c00] 0x105058440 <nil>  [] 0s} localhost 50456 <nil> <nil>}
	I0930 04:07:41.696363    5073 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0930 04:07:42.070574    5073 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0930 04:07:42.070587    5073 machine.go:96] duration metric: took 973.367625ms to provisionDockerMachine
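
The `sudo diff -u ... || { sudo mv ...; systemctl ...; }` one-liner above is the idempotent unit update: diff exits non-zero when the rendered unit differs from the installed one (or, as here, when the installed one does not exist yet), so the swap-and-restart branch fires only when something actually changed. A sketch of composing that command in Go, under the assumption that the unit lives in /lib/systemd/system as in this log:

package main

import "fmt"

// buildUpdateCmd composes the swap seen in the log: restart runs only when
// the freshly rendered .new file differs from (or replaces a missing) unit.
func buildUpdateCmd(unit string) string {
	target := "/lib/systemd/system/" + unit
	return fmt.Sprintf(
		"sudo diff -u %[1]s %[1]s.new || "+
			"{ sudo mv %[1]s.new %[1]s; sudo systemctl -f daemon-reload && "+
			"sudo systemctl -f enable %[2]s && sudo systemctl -f restart %[2]s; }",
		target, unit)
}

func main() { fmt.Println(buildUpdateCmd("docker.service")) }
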
	I0930 04:07:42.070598    5073 start.go:293] postStartSetup for "stopped-upgrade-312000" (driver="qemu2")
	I0930 04:07:42.070606    5073 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0930 04:07:42.070666    5073 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0930 04:07:42.070677    5073 sshutil.go:53] new ssh client: &{IP:localhost Port:50456 SSHKeyPath:/Users/jenkins/minikube-integration/19734-1406/.minikube/machines/stopped-upgrade-312000/id_rsa Username:docker}
	I0930 04:07:42.109675    5073 ssh_runner.go:195] Run: cat /etc/os-release
	I0930 04:07:42.110987    5073 info.go:137] Remote host: Buildroot 2021.02.12
	I0930 04:07:42.110996    5073 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19734-1406/.minikube/addons for local assets ...
	I0930 04:07:42.111081    5073 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19734-1406/.minikube/files for local assets ...
	I0930 04:07:42.111217    5073 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19734-1406/.minikube/files/etc/ssl/certs/19292.pem -> 19292.pem in /etc/ssl/certs
	I0930 04:07:42.111356    5073 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0930 04:07:42.113966    5073 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19734-1406/.minikube/files/etc/ssl/certs/19292.pem --> /etc/ssl/certs/19292.pem (1708 bytes)
	I0930 04:07:42.121166    5073 start.go:296] duration metric: took 50.56325ms for postStartSetup
	I0930 04:07:42.121182    5073 fix.go:56] duration metric: took 21.332401208s for fixHost
	I0930 04:07:42.121220    5073 main.go:141] libmachine: Using SSH client type: native
	I0930 04:07:42.121323    5073 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x105055c00] 0x105058440 <nil>  [] 0s} localhost 50456 <nil> <nil>}
	I0930 04:07:42.121328    5073 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0930 04:07:42.195167    5073 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727694462.585220379
	
	I0930 04:07:42.195178    5073 fix.go:216] guest clock: 1727694462.585220379
	I0930 04:07:42.195182    5073 fix.go:229] Guest: 2024-09-30 04:07:42.585220379 -0700 PDT Remote: 2024-09-30 04:07:42.121183 -0700 PDT m=+21.455551626 (delta=464.037379ms)
	I0930 04:07:42.195198    5073 fix.go:200] guest clock delta is within tolerance: 464.037379ms
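
The clock check above samples the host time, runs `date +%s.%N` in the guest, and accepts the run when the delta is small (464ms here). A sketch of the arithmetic, with the values from this log; the function name is illustrative:

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// clockDelta parses the guest's `date +%s.%N` output and returns how far the
// guest clock sits ahead of (or behind) the host sample taken alongside it.
// float64 cannot hold full nanosecond precision at epoch scale, so the
// result is approximate; that is fine for a millisecond-level tolerance.
func clockDelta(guestOut string, host time.Time) (time.Duration, error) {
	secs, err := strconv.ParseFloat(strings.TrimSpace(guestOut), 64)
	if err != nil {
		return 0, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	return guest.Sub(host), nil
}

func main() {
	// Host sample from the log: 2024-09-30 04:07:42.121183 -0700 PDT.
	host := time.Date(2024, 9, 30, 11, 7, 42, 121183000, time.UTC)
	d, _ := clockDelta("1727694462.585220379", host)
	fmt.Println(d) // ~464ms, within tolerance as logged above
}
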
	I0930 04:07:42.195202    5073 start.go:83] releasing machines lock for "stopped-upgrade-312000", held for 21.406430709s
	I0930 04:07:42.195276    5073 ssh_runner.go:195] Run: cat /version.json
	I0930 04:07:42.195280    5073 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0930 04:07:42.195288    5073 sshutil.go:53] new ssh client: &{IP:localhost Port:50456 SSHKeyPath:/Users/jenkins/minikube-integration/19734-1406/.minikube/machines/stopped-upgrade-312000/id_rsa Username:docker}
	I0930 04:07:42.195298    5073 sshutil.go:53] new ssh client: &{IP:localhost Port:50456 SSHKeyPath:/Users/jenkins/minikube-integration/19734-1406/.minikube/machines/stopped-upgrade-312000/id_rsa Username:docker}
	W0930 04:07:42.195997    5073 sshutil.go:64] dial failure (will retry): dial tcp [::1]:50456: connect: connection refused
	I0930 04:07:42.196012    5073 retry.go:31] will retry after 130.303222ms: dial tcp [::1]:50456: connect: connection refused
	W0930 04:07:42.233383    5073 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0930 04:07:42.233431    5073 ssh_runner.go:195] Run: systemctl --version
	I0930 04:07:42.235099    5073 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0930 04:07:42.236689    5073 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0930 04:07:42.236723    5073 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0930 04:07:42.239616    5073 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0930 04:07:42.243870    5073 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0930 04:07:42.243877    5073 start.go:495] detecting cgroup driver to use...
	I0930 04:07:42.243962    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0930 04:07:42.250387    5073 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0930 04:07:42.253766    5073 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0930 04:07:42.257222    5073 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0930 04:07:42.257247    5073 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0930 04:07:42.260682    5073 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0930 04:07:42.263569    5073 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0930 04:07:42.266405    5073 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0930 04:07:42.269738    5073 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0930 04:07:42.273175    5073 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0930 04:07:42.276497    5073 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0930 04:07:42.279323    5073 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0930 04:07:42.282290    5073 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0930 04:07:42.285443    5073 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0930 04:07:42.288592    5073 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 04:07:42.372239    5073 ssh_runner.go:195] Run: sudo systemctl restart containerd
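
The containerd pass above is a series of sed edits against /etc/containerd/config.toml; the `SystemdCgroup` one is what actually selects the "cgroupfs" driver. The same rewrite can be expressed in-process, which makes the indentation-preserving capture group explicit (a sketch of the edit, not minikube's actual code path):

package main

import (
	"fmt"
	"regexp"
)

// setSystemdCgroup flips every `SystemdCgroup = ...` line while keeping its
// indentation, mirroring the sed one-liner in the log.
func setSystemdCgroup(config string, enabled bool) string {
	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
	return re.ReplaceAllString(config, fmt.Sprintf("${1}SystemdCgroup = %t", enabled))
}

func main() {
	in := "    [plugins.cri]\n      SystemdCgroup = true\n"
	fmt.Print(setSystemdCgroup(in, false))
}
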
	I0930 04:07:42.382116    5073 start.go:495] detecting cgroup driver to use...
	I0930 04:07:42.382202    5073 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0930 04:07:42.388689    5073 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0930 04:07:42.393368    5073 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0930 04:07:42.400035    5073 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0930 04:07:42.446736    5073 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0930 04:07:42.452191    5073 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0930 04:07:42.517490    5073 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0930 04:07:42.523826    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0930 04:07:42.529948    5073 ssh_runner.go:195] Run: which cri-dockerd
	I0930 04:07:42.531442    5073 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0930 04:07:42.534409    5073 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0930 04:07:42.539370    5073 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0930 04:07:42.617587    5073 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0930 04:07:42.695821    5073 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0930 04:07:42.695884    5073 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0930 04:07:42.701122    5073 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 04:07:42.763509    5073 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0930 04:07:43.884429    5073 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.120920166s)
	I0930 04:07:43.884501    5073 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0930 04:07:43.889455    5073 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0930 04:07:43.894440    5073 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0930 04:07:43.979387    5073 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0930 04:07:44.061450    5073 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 04:07:44.151363    5073 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0930 04:07:44.157003    5073 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0930 04:07:44.161497    5073 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 04:07:44.241891    5073 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0930 04:07:44.279835    5073 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0930 04:07:44.279932    5073 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0930 04:07:44.281990    5073 start.go:563] Will wait 60s for crictl version
	I0930 04:07:44.282048    5073 ssh_runner.go:195] Run: which crictl
	I0930 04:07:44.283552    5073 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0930 04:07:44.297533    5073 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0930 04:07:44.297617    5073 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0930 04:07:44.313906    5073 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0930 04:07:44.333904    5073 out.go:235] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0930 04:07:44.334053    5073 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0930 04:07:44.335322    5073 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
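
The /etc/hosts edit above is also idempotent: strip any stale line ending in "<TAB>host.minikube.internal", append the fresh mapping, and copy the temp file into place. A sketch of building that command string in Go (the helper name is made up for illustration):

package main

import "fmt"

// hostsEntryCmd reproduces the log's /etc/hosts update. The grep pattern uses
// bash's $'\t' quoting; entry embeds a literal tab to match it.
func hostsEntryCmd(ip, name string) string {
	entry := ip + "\t" + name
	return fmt.Sprintf(
		"{ grep -v $'\\t%s$' /etc/hosts; echo \"%s\"; } > /tmp/h.$$; sudo cp /tmp/h.$$ /etc/hosts",
		name, entry)
}

func main() { fmt.Println(hostsEntryCmd("10.0.2.2", "host.minikube.internal")) }
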
	I0930 04:07:44.338877    5073 kubeadm.go:883] updating cluster {Name:stopped-upgrade-312000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50491 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-312000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0930 04:07:44.338931    5073 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0930 04:07:44.338982    5073 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0930 04:07:44.349415    5073 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0930 04:07:44.349426    5073 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0930 04:07:44.349487    5073 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0930 04:07:44.353047    5073 ssh_runner.go:195] Run: which lz4
	I0930 04:07:44.354349    5073 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0930 04:07:44.355583    5073 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0930 04:07:44.355594    5073 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0930 04:07:45.260515    5073 docker.go:649] duration metric: took 906.219083ms to copy over tarball
	I0930 04:07:45.260586    5073 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0930 04:07:46.431632    5073 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.171048583s)
	I0930 04:07:46.431646    5073 ssh_runner.go:146] rm: /preloaded.tar.lz4
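
The preload path above is: scp the lz4 tarball into the guest, untar it into /var with xattrs preserved (so file capabilities survive), then delete it. A host-side sketch of the same tar invocation via os/exec; minikube actually runs it over SSH:

package main

import (
	"log"
	"os/exec"
)

// extractPreload unpacks the lz4-compressed image preload into /var,
// preserving security xattrs, mirroring the tar command in the log.
func extractPreload(tarball string) error {
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", tarball)
	return cmd.Run()
}

func main() {
	if err := extractPreload("/preloaded.tar.lz4"); err != nil {
		log.Fatal(err)
	}
}
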
	I0930 04:07:46.447546    5073 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0930 04:07:46.450503    5073 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0930 04:07:46.455518    5073 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 04:07:46.532148    5073 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0930 04:07:47.981852    5073 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.449705917s)
	I0930 04:07:47.981971    5073 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0930 04:07:47.992608    5073 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0930 04:07:47.992616    5073 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0930 04:07:47.992622    5073 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0930 04:07:47.996662    5073 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0930 04:07:47.998644    5073 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0930 04:07:48.000758    5073 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0930 04:07:48.000921    5073 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0930 04:07:48.003024    5073 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0930 04:07:48.003117    5073 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0930 04:07:48.004577    5073 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0930 04:07:48.004780    5073 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0930 04:07:48.005283    5073 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0930 04:07:48.005875    5073 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0930 04:07:48.007064    5073 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0930 04:07:48.007799    5073 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0930 04:07:48.008298    5073 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0930 04:07:48.008505    5073 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0930 04:07:48.009429    5073 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0930 04:07:48.010203    5073 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
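
What follows in the log is a per-image verify-then-transfer loop: inspect the image in the guest's docker, and when it is missing or resolves to a different content ID than the cached copy, remove it, scp the cached tarball over, and pipe it into `docker load`. A sketch of the check half, under the assumption of a plain docker CLI (the expected ID below is truncated and purely illustrative):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// needsTransfer reports whether an image must be copied in: true when
// `docker image inspect` fails (image absent) or returns a different ID.
func needsTransfer(image, wantID string) bool {
	out, err := exec.Command("docker", "image", "inspect",
		"--format", "{{.Id}}", image).Output()
	if err != nil {
		return true // image missing entirely
	}
	return strings.TrimSpace(string(out)) != wantID
}

func main() {
	fmt.Println(needsTransfer("registry.k8s.io/pause:3.7", "sha256:e5a475a03805...")) // illustrative ID
}
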
	I0930 04:07:49.906595    5073 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0930 04:07:49.934341    5073 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0930 04:07:49.934392    5073 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0930 04:07:49.934509    5073 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0930 04:07:49.952937    5073 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0930 04:07:50.011098    5073 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0930 04:07:50.029142    5073 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0930 04:07:50.029166    5073 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0930 04:07:50.029256    5073 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0930 04:07:50.044495    5073 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0930 04:07:50.044850    5073 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0930 04:07:50.044981    5073 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0930 04:07:50.059309    5073 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0930 04:07:50.059338    5073 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0930 04:07:50.059396    5073 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0930 04:07:50.059415    5073 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0930 04:07:50.059472    5073 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0930 04:07:50.067935    5073 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0930 04:07:50.067951    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0930 04:07:50.073064    5073 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	W0930 04:07:50.087346    5073 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0930 04:07:50.087495    5073 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0930 04:07:50.106172    5073 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0930 04:07:50.106215    5073 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0930 04:07:50.106232    5073 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0930 04:07:50.106296    5073 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0930 04:07:50.116444    5073 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0930 04:07:50.116585    5073 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0930 04:07:50.118084    5073 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0930 04:07:50.118103    5073 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0930 04:07:50.161857    5073 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0930 04:07:50.161872    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0930 04:07:50.195590    5073 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	W0930 04:07:50.343561    5073 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0930 04:07:50.343844    5073 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0930 04:07:50.369089    5073 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0930 04:07:50.369120    5073 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0930 04:07:50.369217    5073 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0930 04:07:50.387344    5073 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0930 04:07:50.387491    5073 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0930 04:07:50.388950    5073 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0930 04:07:50.388962    5073 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0930 04:07:50.418927    5073 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0930 04:07:50.418939    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0930 04:07:50.548161    5073 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0930 04:07:50.552866    5073 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0930 04:07:50.603455    5073 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0930 04:07:50.660337    5073 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0930 04:07:50.660376    5073 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0930 04:07:50.660383    5073 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0930 04:07:50.660397    5073 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0930 04:07:50.660398    5073 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0930 04:07:50.660417    5073 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0930 04:07:50.660431    5073 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0930 04:07:50.660467    5073 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0930 04:07:50.660468    5073 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0930 04:07:50.660468    5073 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0930 04:07:50.685639    5073 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0930 04:07:50.685655    5073 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0930 04:07:50.685867    5073 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0930 04:07:50.685890    5073 cache_images.go:92] duration metric: took 2.693300375s to LoadCachedImages
	W0930 04:07:50.685921    5073 out.go:270] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1: no such file or directory
	I0930 04:07:50.685927    5073 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0930 04:07:50.685981    5073 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-312000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-312000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0930 04:07:50.686046    5073 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0930 04:07:50.699093    5073 cni.go:84] Creating CNI manager for ""
	I0930 04:07:50.699109    5073 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0930 04:07:50.699122    5073 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0930 04:07:50.699133    5073 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-312000 NodeName:stopped-upgrade-312000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0930 04:07:50.699202    5073 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-312000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
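
The kubeadm config above is rendered from the options struct logged at kubeadm.go:181. A much-reduced sketch of that rendering with text/template, covering only the InitConfiguration fragment; minikube's real template carries many more fields:

package main

import (
	"log"
	"os"
	"text/template"
)

const initTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.NodeIP}}
  bindPort: {{.Port}}
nodeRegistration:
  criSocket: unix://{{.CRISocket}}
  name: "{{.NodeName}}"
`

func main() {
	t := template.Must(template.New("init").Parse(initTmpl))
	data := struct {
		NodeIP, CRISocket, NodeName string
		Port                        int
	}{"10.0.2.15", "/var/run/cri-dockerd.sock", "stopped-upgrade-312000", 8443}
	if err := t.Execute(os.Stdout, data); err != nil {
		log.Fatal(err)
	}
}
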
	
	I0930 04:07:50.699260    5073 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0930 04:07:50.701938    5073 binaries.go:44] Found k8s binaries, skipping transfer
	I0930 04:07:50.701973    5073 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0930 04:07:50.704554    5073 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0930 04:07:50.709367    5073 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0930 04:07:50.714063    5073 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0930 04:07:50.719054    5073 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0930 04:07:50.720197    5073 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0930 04:07:50.723839    5073 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 04:07:50.800548    5073 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0930 04:07:50.806021    5073 certs.go:68] Setting up /Users/jenkins/minikube-integration/19734-1406/.minikube/profiles/stopped-upgrade-312000 for IP: 10.0.2.15
	I0930 04:07:50.806032    5073 certs.go:194] generating shared ca certs ...
	I0930 04:07:50.806041    5073 certs.go:226] acquiring lock for ca certs: {Name:mkeec9701f93539137211ace80b844b19e48dcd1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 04:07:50.806213    5073 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19734-1406/.minikube/ca.key
	I0930 04:07:50.806266    5073 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19734-1406/.minikube/proxy-client-ca.key
	I0930 04:07:50.806272    5073 certs.go:256] generating profile certs ...
	I0930 04:07:50.806354    5073 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19734-1406/.minikube/profiles/stopped-upgrade-312000/client.key
	I0930 04:07:50.806370    5073 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19734-1406/.minikube/profiles/stopped-upgrade-312000/apiserver.key.3f7403ac
	I0930 04:07:50.806381    5073 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19734-1406/.minikube/profiles/stopped-upgrade-312000/apiserver.crt.3f7403ac with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0930 04:07:51.028628    5073 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19734-1406/.minikube/profiles/stopped-upgrade-312000/apiserver.crt.3f7403ac ...
	I0930 04:07:51.028646    5073 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19734-1406/.minikube/profiles/stopped-upgrade-312000/apiserver.crt.3f7403ac: {Name:mk603770b4713bd35f9a58d5d4f9414c2f89c7cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 04:07:51.029000    5073 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19734-1406/.minikube/profiles/stopped-upgrade-312000/apiserver.key.3f7403ac ...
	I0930 04:07:51.029010    5073 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19734-1406/.minikube/profiles/stopped-upgrade-312000/apiserver.key.3f7403ac: {Name:mkf2616396a7a904def419dd7c8e7f7c1e845d78 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 04:07:51.029158    5073 certs.go:381] copying /Users/jenkins/minikube-integration/19734-1406/.minikube/profiles/stopped-upgrade-312000/apiserver.crt.3f7403ac -> /Users/jenkins/minikube-integration/19734-1406/.minikube/profiles/stopped-upgrade-312000/apiserver.crt
	I0930 04:07:51.029325    5073 certs.go:385] copying /Users/jenkins/minikube-integration/19734-1406/.minikube/profiles/stopped-upgrade-312000/apiserver.key.3f7403ac -> /Users/jenkins/minikube-integration/19734-1406/.minikube/profiles/stopped-upgrade-312000/apiserver.key
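
The crypto.go lines above mint a fresh apiserver serving certificate signed by minikubeCA, with the four IP SANs listed (10.96.0.1 is the in-cluster service VIP, 10.0.2.15 the node IP). A library-style sketch of that step using crypto/x509; field choices like the 3-year lifetime are illustrative, not minikube's exact values:

package certs

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"time"
)

// signServingCert generates a fresh RSA key and issues a serving certificate
// from the CA pair, embedding the given IP SANs. Returns both PEM blocks.
func signServingCert(caCert *x509.Certificate, caKey *rsa.PrivateKey,
	ips []net.IP) (certPEM, keyPEM []byte, err error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now().Add(-time.Hour),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  ips, // e.g. 10.96.0.1, 127.0.0.1, 10.0.0.1, 10.0.2.15
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	if err != nil {
		return nil, nil, err
	}
	certPEM = pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
	keyPEM = pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY",
		Bytes: x509.MarshalPKCS1PrivateKey(key)})
	return certPEM, keyPEM, nil
}
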
	I0930 04:07:51.032144    5073 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19734-1406/.minikube/profiles/stopped-upgrade-312000/proxy-client.key
	I0930 04:07:51.032313    5073 certs.go:484] found cert: /Users/jenkins/minikube-integration/19734-1406/.minikube/certs/1929.pem (1338 bytes)
	W0930 04:07:51.032343    5073 certs.go:480] ignoring /Users/jenkins/minikube-integration/19734-1406/.minikube/certs/1929_empty.pem, impossibly tiny 0 bytes
	I0930 04:07:51.032350    5073 certs.go:484] found cert: /Users/jenkins/minikube-integration/19734-1406/.minikube/certs/ca-key.pem (1675 bytes)
	I0930 04:07:51.032371    5073 certs.go:484] found cert: /Users/jenkins/minikube-integration/19734-1406/.minikube/certs/ca.pem (1078 bytes)
	I0930 04:07:51.032392    5073 certs.go:484] found cert: /Users/jenkins/minikube-integration/19734-1406/.minikube/certs/cert.pem (1123 bytes)
	I0930 04:07:51.032409    5073 certs.go:484] found cert: /Users/jenkins/minikube-integration/19734-1406/.minikube/certs/key.pem (1675 bytes)
	I0930 04:07:51.032453    5073 certs.go:484] found cert: /Users/jenkins/minikube-integration/19734-1406/.minikube/files/etc/ssl/certs/19292.pem (1708 bytes)
	I0930 04:07:51.032837    5073 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19734-1406/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0930 04:07:51.039878    5073 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19734-1406/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0930 04:07:51.047074    5073 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19734-1406/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0930 04:07:51.053559    5073 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19734-1406/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0930 04:07:51.060694    5073 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19734-1406/.minikube/profiles/stopped-upgrade-312000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0930 04:07:51.067496    5073 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19734-1406/.minikube/profiles/stopped-upgrade-312000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0930 04:07:51.074021    5073 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19734-1406/.minikube/profiles/stopped-upgrade-312000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0930 04:07:51.081176    5073 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19734-1406/.minikube/profiles/stopped-upgrade-312000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0930 04:07:51.088329    5073 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19734-1406/.minikube/files/etc/ssl/certs/19292.pem --> /usr/share/ca-certificates/19292.pem (1708 bytes)
	I0930 04:07:51.095226    5073 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19734-1406/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0930 04:07:51.101817    5073 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19734-1406/.minikube/certs/1929.pem --> /usr/share/ca-certificates/1929.pem (1338 bytes)
	I0930 04:07:51.109275    5073 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0930 04:07:51.115113    5073 ssh_runner.go:195] Run: openssl version
	I0930 04:07:51.117152    5073 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0930 04:07:51.120722    5073 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0930 04:07:51.122145    5073 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 30 10:21 /usr/share/ca-certificates/minikubeCA.pem
	I0930 04:07:51.122170    5073 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0930 04:07:51.123991    5073 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0930 04:07:51.126753    5073 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1929.pem && ln -fs /usr/share/ca-certificates/1929.pem /etc/ssl/certs/1929.pem"
	I0930 04:07:51.129725    5073 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1929.pem
	I0930 04:07:51.131008    5073 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 30 10:37 /usr/share/ca-certificates/1929.pem
	I0930 04:07:51.131035    5073 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1929.pem
	I0930 04:07:51.132592    5073 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1929.pem /etc/ssl/certs/51391683.0"
	I0930 04:07:51.135664    5073 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/19292.pem && ln -fs /usr/share/ca-certificates/19292.pem /etc/ssl/certs/19292.pem"
	I0930 04:07:51.138460    5073 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/19292.pem
	I0930 04:07:51.139860    5073 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 30 10:37 /usr/share/ca-certificates/19292.pem
	I0930 04:07:51.139882    5073 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/19292.pem
	I0930 04:07:51.141525    5073 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/19292.pem /etc/ssl/certs/3ec20f2e.0"
	I0930 04:07:51.144780    5073 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0930 04:07:51.146280    5073 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0930 04:07:51.148254    5073 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0930 04:07:51.150148    5073 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0930 04:07:51.152033    5073 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0930 04:07:51.153951    5073 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0930 04:07:51.155940    5073 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
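
Each `openssl x509 -checkend 86400` run above asks one question: does this certificate expire within 24 hours? The Go equivalent, for reference (the helper name is made up):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"errors"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires inside d,
// mirroring `openssl x509 -noout -in path -checkend <seconds>`.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, errors.New("no PEM block in " + path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	fmt.Println(soon, err)
}
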
	I0930 04:07:51.157788    5073 kubeadm.go:392] StartCluster: {Name:stopped-upgrade-312000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50491 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-312000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0930 04:07:51.157867    5073 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0930 04:07:51.172283    5073 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0930 04:07:51.175399    5073 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0930 04:07:51.175410    5073 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0930 04:07:51.175439    5073 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0930 04:07:51.178775    5073 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0930 04:07:51.179760    5073 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-312000" does not appear in /Users/jenkins/minikube-integration/19734-1406/kubeconfig
	I0930 04:07:51.180154    5073 kubeconfig.go:62] /Users/jenkins/minikube-integration/19734-1406/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-312000" cluster setting kubeconfig missing "stopped-upgrade-312000" context setting]
	I0930 04:07:51.180355    5073 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19734-1406/kubeconfig: {Name:mkab83a5d15ec3b983b07760462d9a2ee8e3b4e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 04:07:51.180812    5073 kapi.go:59] client config for stopped-upgrade-312000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19734-1406/.minikube/profiles/stopped-upgrade-312000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19734-1406/.minikube/profiles/stopped-upgrade-312000/client.key", CAFile:"/Users/jenkins/minikube-integration/19734-1406/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x10662e5d0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0930 04:07:51.181148    5073 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0930 04:07:51.184398    5073 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-312000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
	I0930 04:07:51.184405    5073 kubeadm.go:1160] stopping kube-system containers ...
	I0930 04:07:51.184458    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0930 04:07:51.195228    5073 docker.go:483] Stopping containers: [7204ff5e6c12 a6e35c8796d8 5c05fceb7aa1 6c0f2823a096 9a2747d15d5c 3d6f8a951f44 82cb48f54510 5590b05fa90f]
	I0930 04:07:51.195308    5073 ssh_runner.go:195] Run: docker stop 7204ff5e6c12 a6e35c8796d8 5c05fceb7aa1 6c0f2823a096 9a2747d15d5c 3d6f8a951f44 82cb48f54510 5590b05fa90f
	I0930 04:07:51.205936    5073 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0930 04:07:51.211941    5073 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0930 04:07:51.214578    5073 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0930 04:07:51.214588    5073 kubeadm.go:157] found existing configuration files:
	
	I0930 04:07:51.214617    5073 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50491 /etc/kubernetes/admin.conf
	I0930 04:07:51.217500    5073 kubeadm.go:163] "https://control-plane.minikube.internal:50491" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50491 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0930 04:07:51.217532    5073 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0930 04:07:51.220393    5073 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50491 /etc/kubernetes/kubelet.conf
	I0930 04:07:51.222787    5073 kubeadm.go:163] "https://control-plane.minikube.internal:50491" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50491 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0930 04:07:51.222813    5073 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0930 04:07:51.225553    5073 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50491 /etc/kubernetes/controller-manager.conf
	I0930 04:07:51.228420    5073 kubeadm.go:163] "https://control-plane.minikube.internal:50491" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50491 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0930 04:07:51.228445    5073 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0930 04:07:51.231230    5073 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50491 /etc/kubernetes/scheduler.conf
	I0930 04:07:51.233714    5073 kubeadm.go:163] "https://control-plane.minikube.internal:50491" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50491 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0930 04:07:51.233737    5073 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0930 04:07:51.236771    5073 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
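
	[editor's note] The grep/rm sequence above is a stale-config sweep: each kubeconfig under /etc/kubernetes must reference the expected control-plane endpoint, otherwise it is removed so kubeadm can regenerate it, and the freshly rendered kubeadm.yaml.new is then promoted. A sketch with the endpoint and paths taken from the log and error handling simplified:

	    package main

	    import (
	        "os"
	        "os/exec"
	    )

	    func main() {
	        endpoint := "https://control-plane.minikube.internal:50491"
	        for _, f := range []string{
	            "/etc/kubernetes/admin.conf",
	            "/etc/kubernetes/kubelet.conf",
	            "/etc/kubernetes/controller-manager.conf",
	            "/etc/kubernetes/scheduler.conf",
	        } {
	            // grep exits non-zero when the pattern (or the file itself) is
	            // missing; either way the config cannot be trusted, so delete it.
	            if err := exec.Command("grep", endpoint, f).Run(); err != nil {
	                os.Remove(f) // tolerate a missing file, like rm -f
	            }
	        }
	        // Promote the freshly rendered config before re-running kubeadm.
	        data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new")
	        if err != nil {
	            panic(err)
	        }
	        if err := os.WriteFile("/var/tmp/minikube/kubeadm.yaml", data, 0644); err != nil {
	            panic(err)
	        }
	    }
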
	I0930 04:07:51.239587    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0930 04:07:51.260604    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0930 04:07:51.664368    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0930 04:07:51.808293    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0930 04:07:51.837546    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
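
	[editor's note] Rather than a full `kubeadm init`, the five Run: lines above replay individual init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the rendered config, with PATH overridden to pin the v1.24.1 binaries. A sketch of that phased sequence; this mirrors the logged commands and is illustrative, not minikube's own code:

	    package main

	    import (
	        "fmt"
	        "os/exec"
	    )

	    func main() {
	        phases := []string{
	            "certs all",
	            "kubeconfig all",
	            "kubelet-start",
	            "control-plane all",
	            "etcd local",
	        }
	        for _, p := range phases {
	            // PATH override pins the kubeadm binary for the target version.
	            cmd := fmt.Sprintf(
	                `sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" `+
	                    `kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml`, p)
	            if err := exec.Command("/bin/bash", "-c", cmd).Run(); err != nil {
	                panic(fmt.Sprintf("phase %q failed: %v", p, err))
	            }
	        }
	    }
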
	I0930 04:07:51.866228    5073 api_server.go:52] waiting for apiserver process to appear ...
	I0930 04:07:51.866323    5073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 04:07:52.368415    5073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 04:07:52.868385    5073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 04:07:52.873003    5073 api_server.go:72] duration metric: took 1.006789917s to wait for apiserver process to appear ...
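
	[editor's note] The "waiting for apiserver process" step polls pgrep roughly every 500ms (note the ~.87/.37/.87 spacing of the timestamps above) and records the wait as a duration metric once a match appears. A sketch, assuming pgrep is available with the -xnf flags shown in the log:

	    package main

	    import (
	        "fmt"
	        "os/exec"
	        "time"
	    )

	    func main() {
	        start := time.Now()
	        for {
	            // pgrep exits 0 as soon as a matching process exists.
	            if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
	                break
	            }
	            time.Sleep(500 * time.Millisecond)
	        }
	        fmt.Printf("duration metric: took %s to wait for apiserver process to appear ...\n",
	            time.Since(start))
	    }
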
	I0930 04:07:52.873011    5073 api_server.go:88] waiting for apiserver healthz status ...
	I0930 04:07:52.873025    5073 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:07:57.875033    5073 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:07:57.875076    5073 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:08:02.875411    5073 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:08:02.875465    5073 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:08:07.875944    5073 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:08:07.875968    5073 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:08:12.876355    5073 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:08:12.876378    5073 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:08:17.876979    5073 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:08:17.877047    5073 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:08:22.878023    5073 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:08:22.878065    5073 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:08:27.879224    5073 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:08:27.879263    5073 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:08:32.880648    5073 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:08:32.880674    5073 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:08:37.882404    5073 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:08:37.882461    5073 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:08:42.884680    5073 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:08:42.884706    5073 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:08:47.884920    5073 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:08:47.884944    5073 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:08:52.887084    5073 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
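
	[editor's note] Each healthz check above is an HTTPS GET with a hard ~5s client timeout, which is why an unreachable apiserver surfaces every five seconds as "context deadline exceeded (Client.Timeout exceeded while awaiting headers)". A sketch of one probe; TLS verification is skipped here purely for brevity (an assumption — minikube itself verifies against the cluster CA):

	    package main

	    import (
	        "crypto/tls"
	        "fmt"
	        "net/http"
	        "time"
	    )

	    func main() {
	        client := &http.Client{
	            Timeout: 5 * time.Second, // matches the ~5s spacing between checks
	            Transport: &http.Transport{
	                TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // sketch only
	            },
	        }
	        resp, err := client.Get("https://10.0.2.15:8443/healthz")
	        if err != nil {
	            fmt.Println("stopped:", err) // the failure mode seen throughout this log
	            return
	        }
	        defer resp.Body.Close()
	        fmt.Println("healthz:", resp.Status)
	    }
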
	I0930 04:08:52.887273    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0930 04:08:52.898461    5073 logs.go:276] 2 containers: [ed13ab559759 5c05fceb7aa1]
	I0930 04:08:52.898561    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0930 04:08:52.908802    5073 logs.go:276] 2 containers: [39ed46927cbf 9a2747d15d5c]
	I0930 04:08:52.908879    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0930 04:08:52.918684    5073 logs.go:276] 1 containers: [857ff5a64aef]
	I0930 04:08:52.918776    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0930 04:08:52.928742    5073 logs.go:276] 2 containers: [f87365fcd967 82cb48f54510]
	I0930 04:08:52.928822    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0930 04:08:52.939123    5073 logs.go:276] 1 containers: [341e764f9485]
	I0930 04:08:52.939215    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0930 04:08:52.949240    5073 logs.go:276] 2 containers: [e0d1000bec45 7204ff5e6c12]
	I0930 04:08:52.949323    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0930 04:08:52.959286    5073 logs.go:276] 0 containers: []
	W0930 04:08:52.959298    5073 logs.go:278] No container was found matching "kindnet"
	I0930 04:08:52.959373    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0930 04:08:52.969798    5073 logs.go:276] 1 containers: [e90f49b06c7e]
	I0930 04:08:52.969814    5073 logs.go:123] Gathering logs for kubelet ...
	I0930 04:08:52.969820    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 04:08:53.009704    5073 logs.go:123] Gathering logs for describe nodes ...
	I0930 04:08:53.009713    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0930 04:08:53.088188    5073 logs.go:123] Gathering logs for coredns [857ff5a64aef] ...
	I0930 04:08:53.088202    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 857ff5a64aef"
	I0930 04:08:53.100292    5073 logs.go:123] Gathering logs for kube-scheduler [f87365fcd967] ...
	I0930 04:08:53.100308    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f87365fcd967"
	I0930 04:08:53.112011    5073 logs.go:123] Gathering logs for kube-scheduler [82cb48f54510] ...
	I0930 04:08:53.112021    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82cb48f54510"
	I0930 04:08:53.129133    5073 logs.go:123] Gathering logs for storage-provisioner [e90f49b06c7e] ...
	I0930 04:08:53.129144    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e90f49b06c7e"
	I0930 04:08:53.144933    5073 logs.go:123] Gathering logs for kube-controller-manager [7204ff5e6c12] ...
	I0930 04:08:53.144945    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7204ff5e6c12"
	I0930 04:08:53.160977    5073 logs.go:123] Gathering logs for Docker ...
	I0930 04:08:53.160988    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0930 04:08:53.185893    5073 logs.go:123] Gathering logs for kube-apiserver [ed13ab559759] ...
	I0930 04:08:53.185902    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed13ab559759"
	I0930 04:08:53.200789    5073 logs.go:123] Gathering logs for etcd [9a2747d15d5c] ...
	I0930 04:08:53.200800    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a2747d15d5c"
	I0930 04:08:53.220505    5073 logs.go:123] Gathering logs for kube-proxy [341e764f9485] ...
	I0930 04:08:53.220516    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 341e764f9485"
	I0930 04:08:53.236129    5073 logs.go:123] Gathering logs for kube-controller-manager [e0d1000bec45] ...
	I0930 04:08:53.236139    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0d1000bec45"
	I0930 04:08:53.253037    5073 logs.go:123] Gathering logs for dmesg ...
	I0930 04:08:53.253047    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 04:08:53.257756    5073 logs.go:123] Gathering logs for kube-apiserver [5c05fceb7aa1] ...
	I0930 04:08:53.257765    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c05fceb7aa1"
	I0930 04:08:53.283990    5073 logs.go:123] Gathering logs for etcd [39ed46927cbf] ...
	I0930 04:08:53.284009    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39ed46927cbf"
	I0930 04:08:53.299252    5073 logs.go:123] Gathering logs for container status ...
	I0930 04:08:53.299262    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
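
	[editor's note] The gathering pass above (and each repetition of it below) enumerates containers per control-plane component by Docker name filter, then tails the last 400 lines from every match, alongside the kubelet/docker journals, dmesg, describe-nodes output, and container status. A sketch of the per-component part, with the component list mirroring the log:

	    package main

	    import (
	        "fmt"
	        "os/exec"
	        "strings"
	    )

	    func main() {
	        components := []string{
	            "kube-apiserver", "etcd", "coredns", "kube-scheduler",
	            "kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner",
	        }
	        for _, c := range components {
	            out, err := exec.Command("docker", "ps", "-a",
	                "--filter", "name=k8s_"+c, "--format", "{{.ID}}").Output()
	            if err != nil {
	                continue
	            }
	            ids := strings.Fields(string(out))
	            if len(ids) == 0 {
	                fmt.Printf("No container was found matching %q\n", c)
	                continue
	            }
	            for _, id := range ids {
	                fmt.Printf("Gathering logs for %s [%s] ...\n", c, id)
	                logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
	                fmt.Print(string(logs))
	            }
	        }
	    }
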
	I0930 04:08:55.811471    5073 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:09:00.813759    5073 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:09:00.813928    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0930 04:09:00.824548    5073 logs.go:276] 2 containers: [ed13ab559759 5c05fceb7aa1]
	I0930 04:09:00.824640    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0930 04:09:00.834620    5073 logs.go:276] 2 containers: [39ed46927cbf 9a2747d15d5c]
	I0930 04:09:00.834708    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0930 04:09:00.845587    5073 logs.go:276] 1 containers: [857ff5a64aef]
	I0930 04:09:00.845676    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0930 04:09:00.858051    5073 logs.go:276] 2 containers: [f87365fcd967 82cb48f54510]
	I0930 04:09:00.858137    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0930 04:09:00.868896    5073 logs.go:276] 1 containers: [341e764f9485]
	I0930 04:09:00.868979    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0930 04:09:00.880896    5073 logs.go:276] 2 containers: [e0d1000bec45 7204ff5e6c12]
	I0930 04:09:00.880979    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0930 04:09:00.891730    5073 logs.go:276] 0 containers: []
	W0930 04:09:00.891756    5073 logs.go:278] No container was found matching "kindnet"
	I0930 04:09:00.891833    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0930 04:09:00.902450    5073 logs.go:276] 1 containers: [e90f49b06c7e]
	I0930 04:09:00.902470    5073 logs.go:123] Gathering logs for coredns [857ff5a64aef] ...
	I0930 04:09:00.902476    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 857ff5a64aef"
	I0930 04:09:00.915851    5073 logs.go:123] Gathering logs for kube-controller-manager [e0d1000bec45] ...
	I0930 04:09:00.915863    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0d1000bec45"
	I0930 04:09:00.933667    5073 logs.go:123] Gathering logs for kube-controller-manager [7204ff5e6c12] ...
	I0930 04:09:00.933678    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7204ff5e6c12"
	I0930 04:09:00.945643    5073 logs.go:123] Gathering logs for dmesg ...
	I0930 04:09:00.945653    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 04:09:00.950025    5073 logs.go:123] Gathering logs for kube-apiserver [5c05fceb7aa1] ...
	I0930 04:09:00.950033    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c05fceb7aa1"
	I0930 04:09:00.975037    5073 logs.go:123] Gathering logs for etcd [9a2747d15d5c] ...
	I0930 04:09:00.975048    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a2747d15d5c"
	I0930 04:09:00.989596    5073 logs.go:123] Gathering logs for etcd [39ed46927cbf] ...
	I0930 04:09:00.989606    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39ed46927cbf"
	I0930 04:09:01.003224    5073 logs.go:123] Gathering logs for kube-scheduler [82cb48f54510] ...
	I0930 04:09:01.003235    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82cb48f54510"
	I0930 04:09:01.019726    5073 logs.go:123] Gathering logs for Docker ...
	I0930 04:09:01.019737    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0930 04:09:01.043904    5073 logs.go:123] Gathering logs for kube-proxy [341e764f9485] ...
	I0930 04:09:01.043913    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 341e764f9485"
	I0930 04:09:01.055341    5073 logs.go:123] Gathering logs for container status ...
	I0930 04:09:01.055352    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 04:09:01.067233    5073 logs.go:123] Gathering logs for kubelet ...
	I0930 04:09:01.067244    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 04:09:01.104713    5073 logs.go:123] Gathering logs for describe nodes ...
	I0930 04:09:01.104725    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0930 04:09:01.141025    5073 logs.go:123] Gathering logs for kube-scheduler [f87365fcd967] ...
	I0930 04:09:01.141039    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f87365fcd967"
	I0930 04:09:01.153295    5073 logs.go:123] Gathering logs for kube-apiserver [ed13ab559759] ...
	I0930 04:09:01.153306    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed13ab559759"
	I0930 04:09:01.168334    5073 logs.go:123] Gathering logs for storage-provisioner [e90f49b06c7e] ...
	I0930 04:09:01.168346    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e90f49b06c7e"
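
	[editor's note] From here to the end of this section the log is one outer retry loop: probe /healthz once with the 5s timeout, on failure run a full log-gathering pass, pause roughly 2.5s, and try again until an overall deadline. A sketch of that structure, where checkHealthz and gatherLogs stand in for the steps sketched earlier and the overall budget is an assumption:

	    package main

	    import (
	        "fmt"
	        "time"
	    )

	    // Stubs for the probe and gathering passes sketched earlier in this section.
	    func checkHealthz() error { return fmt.Errorf("context deadline exceeded") }
	    func gatherLogs()         { /* per-component docker logs pass */ }

	    func main() {
	        // The exact overall budget is an assumption; the log shows many cycles.
	        deadline := time.Now().Add(4 * time.Minute)
	        for time.Now().Before(deadline) {
	            if err := checkHealthz(); err == nil {
	                fmt.Println("apiserver healthz ok")
	                return
	            }
	            gatherLogs()
	            time.Sleep(2500 * time.Millisecond) // ~2.5s gap seen between cycles
	        }
	        fmt.Println("apiserver never became healthy before the deadline")
	    }
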
	I0930 04:09:03.683736    5073 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:09:08.686169    5073 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:09:08.686332    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0930 04:09:08.699336    5073 logs.go:276] 2 containers: [ed13ab559759 5c05fceb7aa1]
	I0930 04:09:08.699429    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0930 04:09:08.710163    5073 logs.go:276] 2 containers: [39ed46927cbf 9a2747d15d5c]
	I0930 04:09:08.710257    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0930 04:09:08.720486    5073 logs.go:276] 1 containers: [857ff5a64aef]
	I0930 04:09:08.720579    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0930 04:09:08.731515    5073 logs.go:276] 2 containers: [f87365fcd967 82cb48f54510]
	I0930 04:09:08.731601    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0930 04:09:08.741919    5073 logs.go:276] 1 containers: [341e764f9485]
	I0930 04:09:08.742005    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0930 04:09:08.752527    5073 logs.go:276] 2 containers: [e0d1000bec45 7204ff5e6c12]
	I0930 04:09:08.752605    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0930 04:09:08.762944    5073 logs.go:276] 0 containers: []
	W0930 04:09:08.762958    5073 logs.go:278] No container was found matching "kindnet"
	I0930 04:09:08.763033    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0930 04:09:08.777549    5073 logs.go:276] 1 containers: [e90f49b06c7e]
	I0930 04:09:08.777566    5073 logs.go:123] Gathering logs for describe nodes ...
	I0930 04:09:08.777572    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0930 04:09:08.812119    5073 logs.go:123] Gathering logs for kube-apiserver [ed13ab559759] ...
	I0930 04:09:08.812130    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed13ab559759"
	I0930 04:09:08.825935    5073 logs.go:123] Gathering logs for kube-controller-manager [e0d1000bec45] ...
	I0930 04:09:08.825945    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0d1000bec45"
	I0930 04:09:08.842831    5073 logs.go:123] Gathering logs for storage-provisioner [e90f49b06c7e] ...
	I0930 04:09:08.842841    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e90f49b06c7e"
	I0930 04:09:08.854271    5073 logs.go:123] Gathering logs for etcd [39ed46927cbf] ...
	I0930 04:09:08.854287    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39ed46927cbf"
	I0930 04:09:08.867886    5073 logs.go:123] Gathering logs for etcd [9a2747d15d5c] ...
	I0930 04:09:08.867916    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a2747d15d5c"
	I0930 04:09:08.883865    5073 logs.go:123] Gathering logs for coredns [857ff5a64aef] ...
	I0930 04:09:08.883877    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 857ff5a64aef"
	I0930 04:09:08.895429    5073 logs.go:123] Gathering logs for kube-scheduler [f87365fcd967] ...
	I0930 04:09:08.895439    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f87365fcd967"
	I0930 04:09:08.907449    5073 logs.go:123] Gathering logs for kube-scheduler [82cb48f54510] ...
	I0930 04:09:08.907459    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82cb48f54510"
	I0930 04:09:08.924145    5073 logs.go:123] Gathering logs for kubelet ...
	I0930 04:09:08.924156    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 04:09:08.962163    5073 logs.go:123] Gathering logs for kube-proxy [341e764f9485] ...
	I0930 04:09:08.962172    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 341e764f9485"
	I0930 04:09:08.973667    5073 logs.go:123] Gathering logs for kube-controller-manager [7204ff5e6c12] ...
	I0930 04:09:08.973677    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7204ff5e6c12"
	I0930 04:09:08.985741    5073 logs.go:123] Gathering logs for container status ...
	I0930 04:09:08.985752    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 04:09:08.997504    5073 logs.go:123] Gathering logs for dmesg ...
	I0930 04:09:08.997513    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 04:09:09.001647    5073 logs.go:123] Gathering logs for kube-apiserver [5c05fceb7aa1] ...
	I0930 04:09:09.001653    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c05fceb7aa1"
	I0930 04:09:09.026552    5073 logs.go:123] Gathering logs for Docker ...
	I0930 04:09:09.026562    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0930 04:09:11.555208    5073 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:09:16.557507    5073 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:09:16.557643    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0930 04:09:16.571198    5073 logs.go:276] 2 containers: [ed13ab559759 5c05fceb7aa1]
	I0930 04:09:16.571297    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0930 04:09:16.582695    5073 logs.go:276] 2 containers: [39ed46927cbf 9a2747d15d5c]
	I0930 04:09:16.582782    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0930 04:09:16.593794    5073 logs.go:276] 1 containers: [857ff5a64aef]
	I0930 04:09:16.593874    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0930 04:09:16.605199    5073 logs.go:276] 2 containers: [f87365fcd967 82cb48f54510]
	I0930 04:09:16.605297    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0930 04:09:16.616554    5073 logs.go:276] 1 containers: [341e764f9485]
	I0930 04:09:16.616640    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0930 04:09:16.627644    5073 logs.go:276] 2 containers: [e0d1000bec45 7204ff5e6c12]
	I0930 04:09:16.627724    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0930 04:09:16.638509    5073 logs.go:276] 0 containers: []
	W0930 04:09:16.638521    5073 logs.go:278] No container was found matching "kindnet"
	I0930 04:09:16.638591    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0930 04:09:16.649380    5073 logs.go:276] 1 containers: [e90f49b06c7e]
	I0930 04:09:16.649402    5073 logs.go:123] Gathering logs for dmesg ...
	I0930 04:09:16.649408    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 04:09:16.654000    5073 logs.go:123] Gathering logs for coredns [857ff5a64aef] ...
	I0930 04:09:16.654008    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 857ff5a64aef"
	I0930 04:09:16.668027    5073 logs.go:123] Gathering logs for kube-scheduler [f87365fcd967] ...
	I0930 04:09:16.668041    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f87365fcd967"
	I0930 04:09:16.680382    5073 logs.go:123] Gathering logs for kube-controller-manager [7204ff5e6c12] ...
	I0930 04:09:16.680392    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7204ff5e6c12"
	I0930 04:09:16.693341    5073 logs.go:123] Gathering logs for kubelet ...
	I0930 04:09:16.693357    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 04:09:16.730952    5073 logs.go:123] Gathering logs for etcd [39ed46927cbf] ...
	I0930 04:09:16.730964    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39ed46927cbf"
	I0930 04:09:16.746820    5073 logs.go:123] Gathering logs for etcd [9a2747d15d5c] ...
	I0930 04:09:16.746830    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a2747d15d5c"
	I0930 04:09:16.760966    5073 logs.go:123] Gathering logs for storage-provisioner [e90f49b06c7e] ...
	I0930 04:09:16.760976    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e90f49b06c7e"
	I0930 04:09:16.772425    5073 logs.go:123] Gathering logs for Docker ...
	I0930 04:09:16.772441    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0930 04:09:16.797896    5073 logs.go:123] Gathering logs for describe nodes ...
	I0930 04:09:16.797904    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0930 04:09:16.845085    5073 logs.go:123] Gathering logs for kube-scheduler [82cb48f54510] ...
	I0930 04:09:16.845100    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82cb48f54510"
	I0930 04:09:16.861612    5073 logs.go:123] Gathering logs for kube-proxy [341e764f9485] ...
	I0930 04:09:16.861622    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 341e764f9485"
	I0930 04:09:16.872946    5073 logs.go:123] Gathering logs for container status ...
	I0930 04:09:16.872954    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 04:09:16.885350    5073 logs.go:123] Gathering logs for kube-apiserver [ed13ab559759] ...
	I0930 04:09:16.885359    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed13ab559759"
	I0930 04:09:16.899556    5073 logs.go:123] Gathering logs for kube-apiserver [5c05fceb7aa1] ...
	I0930 04:09:16.899571    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c05fceb7aa1"
	I0930 04:09:16.924357    5073 logs.go:123] Gathering logs for kube-controller-manager [e0d1000bec45] ...
	I0930 04:09:16.924365    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0d1000bec45"
	I0930 04:09:19.443402    5073 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:09:24.445648    5073 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:09:24.445888    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0930 04:09:24.468513    5073 logs.go:276] 2 containers: [ed13ab559759 5c05fceb7aa1]
	I0930 04:09:24.468638    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0930 04:09:24.483292    5073 logs.go:276] 2 containers: [39ed46927cbf 9a2747d15d5c]
	I0930 04:09:24.483387    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0930 04:09:24.495977    5073 logs.go:276] 1 containers: [857ff5a64aef]
	I0930 04:09:24.496068    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0930 04:09:24.506284    5073 logs.go:276] 2 containers: [f87365fcd967 82cb48f54510]
	I0930 04:09:24.506365    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0930 04:09:24.519805    5073 logs.go:276] 1 containers: [341e764f9485]
	I0930 04:09:24.519890    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0930 04:09:24.530324    5073 logs.go:276] 2 containers: [e0d1000bec45 7204ff5e6c12]
	I0930 04:09:24.530401    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0930 04:09:24.540931    5073 logs.go:276] 0 containers: []
	W0930 04:09:24.540942    5073 logs.go:278] No container was found matching "kindnet"
	I0930 04:09:24.541012    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0930 04:09:24.551338    5073 logs.go:276] 1 containers: [e90f49b06c7e]
	I0930 04:09:24.551357    5073 logs.go:123] Gathering logs for describe nodes ...
	I0930 04:09:24.551362    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0930 04:09:24.587206    5073 logs.go:123] Gathering logs for kube-apiserver [5c05fceb7aa1] ...
	I0930 04:09:24.587219    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c05fceb7aa1"
	I0930 04:09:24.612857    5073 logs.go:123] Gathering logs for etcd [9a2747d15d5c] ...
	I0930 04:09:24.612874    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a2747d15d5c"
	I0930 04:09:24.629293    5073 logs.go:123] Gathering logs for kube-proxy [341e764f9485] ...
	I0930 04:09:24.629305    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 341e764f9485"
	I0930 04:09:24.640944    5073 logs.go:123] Gathering logs for kube-controller-manager [7204ff5e6c12] ...
	I0930 04:09:24.640954    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7204ff5e6c12"
	I0930 04:09:24.653010    5073 logs.go:123] Gathering logs for dmesg ...
	I0930 04:09:24.653019    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 04:09:24.657380    5073 logs.go:123] Gathering logs for kube-scheduler [82cb48f54510] ...
	I0930 04:09:24.657387    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82cb48f54510"
	I0930 04:09:24.684927    5073 logs.go:123] Gathering logs for storage-provisioner [e90f49b06c7e] ...
	I0930 04:09:24.684937    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e90f49b06c7e"
	I0930 04:09:24.696744    5073 logs.go:123] Gathering logs for kubelet ...
	I0930 04:09:24.696754    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 04:09:24.734223    5073 logs.go:123] Gathering logs for etcd [39ed46927cbf] ...
	I0930 04:09:24.734231    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39ed46927cbf"
	I0930 04:09:24.747666    5073 logs.go:123] Gathering logs for coredns [857ff5a64aef] ...
	I0930 04:09:24.747678    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 857ff5a64aef"
	I0930 04:09:24.758978    5073 logs.go:123] Gathering logs for container status ...
	I0930 04:09:24.758987    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 04:09:24.770614    5073 logs.go:123] Gathering logs for kube-apiserver [ed13ab559759] ...
	I0930 04:09:24.770624    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed13ab559759"
	I0930 04:09:24.786185    5073 logs.go:123] Gathering logs for kube-controller-manager [e0d1000bec45] ...
	I0930 04:09:24.786202    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0d1000bec45"
	I0930 04:09:24.804463    5073 logs.go:123] Gathering logs for Docker ...
	I0930 04:09:24.804475    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0930 04:09:24.830225    5073 logs.go:123] Gathering logs for kube-scheduler [f87365fcd967] ...
	I0930 04:09:24.830245    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f87365fcd967"
	I0930 04:09:27.344509    5073 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:09:32.346652    5073 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:09:32.346868    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0930 04:09:32.363877    5073 logs.go:276] 2 containers: [ed13ab559759 5c05fceb7aa1]
	I0930 04:09:32.363963    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0930 04:09:32.378478    5073 logs.go:276] 2 containers: [39ed46927cbf 9a2747d15d5c]
	I0930 04:09:32.378570    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0930 04:09:32.390158    5073 logs.go:276] 1 containers: [857ff5a64aef]
	I0930 04:09:32.390245    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0930 04:09:32.400467    5073 logs.go:276] 2 containers: [f87365fcd967 82cb48f54510]
	I0930 04:09:32.400551    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0930 04:09:32.410735    5073 logs.go:276] 1 containers: [341e764f9485]
	I0930 04:09:32.410821    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0930 04:09:32.421222    5073 logs.go:276] 2 containers: [e0d1000bec45 7204ff5e6c12]
	I0930 04:09:32.421302    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0930 04:09:32.431301    5073 logs.go:276] 0 containers: []
	W0930 04:09:32.431313    5073 logs.go:278] No container was found matching "kindnet"
	I0930 04:09:32.431385    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0930 04:09:32.441990    5073 logs.go:276] 1 containers: [e90f49b06c7e]
	I0930 04:09:32.442008    5073 logs.go:123] Gathering logs for kube-proxy [341e764f9485] ...
	I0930 04:09:32.442013    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 341e764f9485"
	I0930 04:09:32.453538    5073 logs.go:123] Gathering logs for storage-provisioner [e90f49b06c7e] ...
	I0930 04:09:32.453549    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e90f49b06c7e"
	I0930 04:09:32.465188    5073 logs.go:123] Gathering logs for Docker ...
	I0930 04:09:32.465199    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0930 04:09:32.490610    5073 logs.go:123] Gathering logs for etcd [9a2747d15d5c] ...
	I0930 04:09:32.490618    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a2747d15d5c"
	I0930 04:09:32.504763    5073 logs.go:123] Gathering logs for kube-scheduler [f87365fcd967] ...
	I0930 04:09:32.504773    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f87365fcd967"
	I0930 04:09:32.516092    5073 logs.go:123] Gathering logs for kube-controller-manager [7204ff5e6c12] ...
	I0930 04:09:32.516103    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7204ff5e6c12"
	I0930 04:09:32.528515    5073 logs.go:123] Gathering logs for kubelet ...
	I0930 04:09:32.528527    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 04:09:32.565721    5073 logs.go:123] Gathering logs for describe nodes ...
	I0930 04:09:32.565734    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0930 04:09:32.600800    5073 logs.go:123] Gathering logs for kube-scheduler [82cb48f54510] ...
	I0930 04:09:32.600811    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82cb48f54510"
	I0930 04:09:32.617928    5073 logs.go:123] Gathering logs for kube-controller-manager [e0d1000bec45] ...
	I0930 04:09:32.617939    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0d1000bec45"
	I0930 04:09:32.634802    5073 logs.go:123] Gathering logs for container status ...
	I0930 04:09:32.634813    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 04:09:32.649117    5073 logs.go:123] Gathering logs for dmesg ...
	I0930 04:09:32.649126    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 04:09:32.653339    5073 logs.go:123] Gathering logs for kube-apiserver [5c05fceb7aa1] ...
	I0930 04:09:32.653347    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c05fceb7aa1"
	I0930 04:09:32.677342    5073 logs.go:123] Gathering logs for etcd [39ed46927cbf] ...
	I0930 04:09:32.677358    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39ed46927cbf"
	I0930 04:09:32.691534    5073 logs.go:123] Gathering logs for coredns [857ff5a64aef] ...
	I0930 04:09:32.691544    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 857ff5a64aef"
	I0930 04:09:32.703405    5073 logs.go:123] Gathering logs for kube-apiserver [ed13ab559759] ...
	I0930 04:09:32.703417    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed13ab559759"
	I0930 04:09:35.221142    5073 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:09:40.223479    5073 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:09:40.223736    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0930 04:09:40.248036    5073 logs.go:276] 2 containers: [ed13ab559759 5c05fceb7aa1]
	I0930 04:09:40.248178    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0930 04:09:40.264364    5073 logs.go:276] 2 containers: [39ed46927cbf 9a2747d15d5c]
	I0930 04:09:40.264465    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0930 04:09:40.278285    5073 logs.go:276] 1 containers: [857ff5a64aef]
	I0930 04:09:40.278381    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0930 04:09:40.292681    5073 logs.go:276] 2 containers: [f87365fcd967 82cb48f54510]
	I0930 04:09:40.292773    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0930 04:09:40.303316    5073 logs.go:276] 1 containers: [341e764f9485]
	I0930 04:09:40.303398    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0930 04:09:40.313556    5073 logs.go:276] 2 containers: [e0d1000bec45 7204ff5e6c12]
	I0930 04:09:40.313638    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0930 04:09:40.324049    5073 logs.go:276] 0 containers: []
	W0930 04:09:40.324062    5073 logs.go:278] No container was found matching "kindnet"
	I0930 04:09:40.324137    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0930 04:09:40.334502    5073 logs.go:276] 1 containers: [e90f49b06c7e]
	I0930 04:09:40.334520    5073 logs.go:123] Gathering logs for kube-controller-manager [e0d1000bec45] ...
	I0930 04:09:40.334526    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0d1000bec45"
	I0930 04:09:40.351946    5073 logs.go:123] Gathering logs for container status ...
	I0930 04:09:40.351957    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 04:09:40.363781    5073 logs.go:123] Gathering logs for kubelet ...
	I0930 04:09:40.363792    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 04:09:40.402513    5073 logs.go:123] Gathering logs for dmesg ...
	I0930 04:09:40.402529    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 04:09:40.406650    5073 logs.go:123] Gathering logs for etcd [9a2747d15d5c] ...
	I0930 04:09:40.406656    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a2747d15d5c"
	I0930 04:09:40.420642    5073 logs.go:123] Gathering logs for coredns [857ff5a64aef] ...
	I0930 04:09:40.420651    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 857ff5a64aef"
	I0930 04:09:40.432106    5073 logs.go:123] Gathering logs for kube-controller-manager [7204ff5e6c12] ...
	I0930 04:09:40.432116    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7204ff5e6c12"
	I0930 04:09:40.444538    5073 logs.go:123] Gathering logs for describe nodes ...
	I0930 04:09:40.444548    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0930 04:09:40.479337    5073 logs.go:123] Gathering logs for kube-apiserver [ed13ab559759] ...
	I0930 04:09:40.479352    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed13ab559759"
	I0930 04:09:40.493975    5073 logs.go:123] Gathering logs for kube-scheduler [f87365fcd967] ...
	I0930 04:09:40.493985    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f87365fcd967"
	I0930 04:09:40.511978    5073 logs.go:123] Gathering logs for kube-scheduler [82cb48f54510] ...
	I0930 04:09:40.511993    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82cb48f54510"
	I0930 04:09:40.528671    5073 logs.go:123] Gathering logs for etcd [39ed46927cbf] ...
	I0930 04:09:40.528681    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39ed46927cbf"
	I0930 04:09:40.546160    5073 logs.go:123] Gathering logs for storage-provisioner [e90f49b06c7e] ...
	I0930 04:09:40.546171    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e90f49b06c7e"
	I0930 04:09:40.557523    5073 logs.go:123] Gathering logs for Docker ...
	I0930 04:09:40.557537    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0930 04:09:40.582281    5073 logs.go:123] Gathering logs for kube-apiserver [5c05fceb7aa1] ...
	I0930 04:09:40.582293    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c05fceb7aa1"
	I0930 04:09:40.608589    5073 logs.go:123] Gathering logs for kube-proxy [341e764f9485] ...
	I0930 04:09:40.608600    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 341e764f9485"
	I0930 04:09:43.122184    5073 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:09:48.122846    5073 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:09:48.123100    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0930 04:09:48.147360    5073 logs.go:276] 2 containers: [ed13ab559759 5c05fceb7aa1]
	I0930 04:09:48.147486    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0930 04:09:48.164236    5073 logs.go:276] 2 containers: [39ed46927cbf 9a2747d15d5c]
	I0930 04:09:48.164337    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0930 04:09:48.176892    5073 logs.go:276] 1 containers: [857ff5a64aef]
	I0930 04:09:48.176983    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0930 04:09:48.188659    5073 logs.go:276] 2 containers: [f87365fcd967 82cb48f54510]
	I0930 04:09:48.188743    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0930 04:09:48.198820    5073 logs.go:276] 1 containers: [341e764f9485]
	I0930 04:09:48.198902    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0930 04:09:48.209212    5073 logs.go:276] 2 containers: [e0d1000bec45 7204ff5e6c12]
	I0930 04:09:48.209291    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0930 04:09:48.219109    5073 logs.go:276] 0 containers: []
	W0930 04:09:48.219125    5073 logs.go:278] No container was found matching "kindnet"
	I0930 04:09:48.219191    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0930 04:09:48.229839    5073 logs.go:276] 1 containers: [e90f49b06c7e]
	I0930 04:09:48.229861    5073 logs.go:123] Gathering logs for coredns [857ff5a64aef] ...
	I0930 04:09:48.229866    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 857ff5a64aef"
	I0930 04:09:48.241847    5073 logs.go:123] Gathering logs for kube-scheduler [f87365fcd967] ...
	I0930 04:09:48.241859    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f87365fcd967"
	I0930 04:09:48.256707    5073 logs.go:123] Gathering logs for kube-scheduler [82cb48f54510] ...
	I0930 04:09:48.256717    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82cb48f54510"
	I0930 04:09:48.283126    5073 logs.go:123] Gathering logs for kube-proxy [341e764f9485] ...
	I0930 04:09:48.283137    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 341e764f9485"
	I0930 04:09:48.294755    5073 logs.go:123] Gathering logs for kube-controller-manager [7204ff5e6c12] ...
	I0930 04:09:48.294769    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7204ff5e6c12"
	I0930 04:09:48.307127    5073 logs.go:123] Gathering logs for container status ...
	I0930 04:09:48.307140    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 04:09:48.319612    5073 logs.go:123] Gathering logs for kube-apiserver [5c05fceb7aa1] ...
	I0930 04:09:48.319627    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c05fceb7aa1"
	I0930 04:09:48.345112    5073 logs.go:123] Gathering logs for kubelet ...
	I0930 04:09:48.345125    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 04:09:48.383558    5073 logs.go:123] Gathering logs for dmesg ...
	I0930 04:09:48.383565    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 04:09:48.387828    5073 logs.go:123] Gathering logs for kube-apiserver [ed13ab559759] ...
	I0930 04:09:48.387835    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed13ab559759"
	I0930 04:09:48.403347    5073 logs.go:123] Gathering logs for etcd [39ed46927cbf] ...
	I0930 04:09:48.403360    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39ed46927cbf"
	I0930 04:09:48.423277    5073 logs.go:123] Gathering logs for Docker ...
	I0930 04:09:48.423290    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0930 04:09:48.446707    5073 logs.go:123] Gathering logs for describe nodes ...
	I0930 04:09:48.446714    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0930 04:09:48.481837    5073 logs.go:123] Gathering logs for etcd [9a2747d15d5c] ...
	I0930 04:09:48.481852    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a2747d15d5c"
	I0930 04:09:48.496013    5073 logs.go:123] Gathering logs for kube-controller-manager [e0d1000bec45] ...
	I0930 04:09:48.496025    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0d1000bec45"
	I0930 04:09:48.513291    5073 logs.go:123] Gathering logs for storage-provisioner [e90f49b06c7e] ...
	I0930 04:09:48.513303    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e90f49b06c7e"
	I0930 04:09:51.024906    5073 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:09:56.024843    5073 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:09:56.025407    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0930 04:09:56.060880    5073 logs.go:276] 2 containers: [ed13ab559759 5c05fceb7aa1]
	I0930 04:09:56.061048    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0930 04:09:56.079913    5073 logs.go:276] 2 containers: [39ed46927cbf 9a2747d15d5c]
	I0930 04:09:56.080031    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0930 04:09:56.094550    5073 logs.go:276] 1 containers: [857ff5a64aef]
	I0930 04:09:56.094647    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0930 04:09:56.106928    5073 logs.go:276] 2 containers: [f87365fcd967 82cb48f54510]
	I0930 04:09:56.107008    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0930 04:09:56.117886    5073 logs.go:276] 1 containers: [341e764f9485]
	I0930 04:09:56.117965    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0930 04:09:56.128899    5073 logs.go:276] 2 containers: [e0d1000bec45 7204ff5e6c12]
	I0930 04:09:56.128984    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0930 04:09:56.139605    5073 logs.go:276] 0 containers: []
	W0930 04:09:56.139619    5073 logs.go:278] No container was found matching "kindnet"
	I0930 04:09:56.139695    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0930 04:09:56.150505    5073 logs.go:276] 1 containers: [e90f49b06c7e]
	I0930 04:09:56.150523    5073 logs.go:123] Gathering logs for kubelet ...
	I0930 04:09:56.150531    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 04:09:56.189324    5073 logs.go:123] Gathering logs for describe nodes ...
	I0930 04:09:56.189334    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0930 04:09:56.223925    5073 logs.go:123] Gathering logs for etcd [9a2747d15d5c] ...
	I0930 04:09:56.223936    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a2747d15d5c"
	I0930 04:09:56.247952    5073 logs.go:123] Gathering logs for kube-scheduler [f87365fcd967] ...
	I0930 04:09:56.247962    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f87365fcd967"
	I0930 04:09:56.261709    5073 logs.go:123] Gathering logs for kube-controller-manager [e0d1000bec45] ...
	I0930 04:09:56.261722    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0d1000bec45"
	I0930 04:09:56.280202    5073 logs.go:123] Gathering logs for storage-provisioner [e90f49b06c7e] ...
	I0930 04:09:56.280213    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e90f49b06c7e"
	I0930 04:09:56.291667    5073 logs.go:123] Gathering logs for kube-apiserver [ed13ab559759] ...
	I0930 04:09:56.291680    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed13ab559759"
	I0930 04:09:56.306071    5073 logs.go:123] Gathering logs for etcd [39ed46927cbf] ...
	I0930 04:09:56.306084    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39ed46927cbf"
	I0930 04:09:56.319893    5073 logs.go:123] Gathering logs for kube-scheduler [82cb48f54510] ...
	I0930 04:09:56.319903    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82cb48f54510"
	I0930 04:09:56.336809    5073 logs.go:123] Gathering logs for kube-proxy [341e764f9485] ...
	I0930 04:09:56.336820    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 341e764f9485"
	I0930 04:09:56.348920    5073 logs.go:123] Gathering logs for Docker ...
	I0930 04:09:56.348930    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0930 04:09:56.372028    5073 logs.go:123] Gathering logs for container status ...
	I0930 04:09:56.372036    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 04:09:56.384245    5073 logs.go:123] Gathering logs for dmesg ...
	I0930 04:09:56.384256    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 04:09:56.388388    5073 logs.go:123] Gathering logs for kube-apiserver [5c05fceb7aa1] ...
	I0930 04:09:56.388395    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c05fceb7aa1"
	I0930 04:09:56.413218    5073 logs.go:123] Gathering logs for coredns [857ff5a64aef] ...
	I0930 04:09:56.413228    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 857ff5a64aef"
	I0930 04:09:56.427234    5073 logs.go:123] Gathering logs for kube-controller-manager [7204ff5e6c12] ...
	I0930 04:09:56.427244    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7204ff5e6c12"
	I0930 04:09:58.941485    5073 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:10:03.942345    5073 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:10:03.942630    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0930 04:10:03.966930    5073 logs.go:276] 2 containers: [ed13ab559759 5c05fceb7aa1]
	I0930 04:10:03.967067    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0930 04:10:03.982422    5073 logs.go:276] 2 containers: [39ed46927cbf 9a2747d15d5c]
	I0930 04:10:03.982517    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0930 04:10:03.994696    5073 logs.go:276] 1 containers: [857ff5a64aef]
	I0930 04:10:03.994781    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0930 04:10:04.005479    5073 logs.go:276] 2 containers: [f87365fcd967 82cb48f54510]
	I0930 04:10:04.005563    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0930 04:10:04.016184    5073 logs.go:276] 1 containers: [341e764f9485]
	I0930 04:10:04.016267    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0930 04:10:04.028507    5073 logs.go:276] 2 containers: [e0d1000bec45 7204ff5e6c12]
	I0930 04:10:04.028593    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0930 04:10:04.038616    5073 logs.go:276] 0 containers: []
	W0930 04:10:04.038627    5073 logs.go:278] No container was found matching "kindnet"
	I0930 04:10:04.038691    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0930 04:10:04.049054    5073 logs.go:276] 1 containers: [e90f49b06c7e]
	I0930 04:10:04.049074    5073 logs.go:123] Gathering logs for kubelet ...
	I0930 04:10:04.049080    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 04:10:04.089044    5073 logs.go:123] Gathering logs for kube-apiserver [ed13ab559759] ...
	I0930 04:10:04.089059    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed13ab559759"
	I0930 04:10:04.103222    5073 logs.go:123] Gathering logs for kube-scheduler [f87365fcd967] ...
	I0930 04:10:04.103233    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f87365fcd967"
	I0930 04:10:04.114837    5073 logs.go:123] Gathering logs for storage-provisioner [e90f49b06c7e] ...
	I0930 04:10:04.114850    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e90f49b06c7e"
	I0930 04:10:04.126837    5073 logs.go:123] Gathering logs for Docker ...
	I0930 04:10:04.126851    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0930 04:10:04.150167    5073 logs.go:123] Gathering logs for describe nodes ...
	I0930 04:10:04.150177    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0930 04:10:04.186215    5073 logs.go:123] Gathering logs for kube-apiserver [5c05fceb7aa1] ...
	I0930 04:10:04.186231    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c05fceb7aa1"
	I0930 04:10:04.211192    5073 logs.go:123] Gathering logs for etcd [39ed46927cbf] ...
	I0930 04:10:04.211207    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39ed46927cbf"
	I0930 04:10:04.225354    5073 logs.go:123] Gathering logs for kube-scheduler [82cb48f54510] ...
	I0930 04:10:04.225369    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82cb48f54510"
	I0930 04:10:04.242759    5073 logs.go:123] Gathering logs for kube-controller-manager [e0d1000bec45] ...
	I0930 04:10:04.242773    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0d1000bec45"
	I0930 04:10:04.260864    5073 logs.go:123] Gathering logs for etcd [9a2747d15d5c] ...
	I0930 04:10:04.260881    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a2747d15d5c"
	I0930 04:10:04.275620    5073 logs.go:123] Gathering logs for coredns [857ff5a64aef] ...
	I0930 04:10:04.275631    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 857ff5a64aef"
	I0930 04:10:04.287438    5073 logs.go:123] Gathering logs for kube-proxy [341e764f9485] ...
	I0930 04:10:04.287447    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 341e764f9485"
	I0930 04:10:04.299552    5073 logs.go:123] Gathering logs for container status ...
	I0930 04:10:04.299565    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 04:10:04.312558    5073 logs.go:123] Gathering logs for dmesg ...
	I0930 04:10:04.312571    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 04:10:04.316802    5073 logs.go:123] Gathering logs for kube-controller-manager [7204ff5e6c12] ...
	I0930 04:10:04.316812    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7204ff5e6c12"
	I0930 04:10:06.830507    5073 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:10:11.831764    5073 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:10:11.831889    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0930 04:10:11.843394    5073 logs.go:276] 2 containers: [ed13ab559759 5c05fceb7aa1]
	I0930 04:10:11.843493    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0930 04:10:11.854370    5073 logs.go:276] 2 containers: [39ed46927cbf 9a2747d15d5c]
	I0930 04:10:11.854452    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0930 04:10:11.865083    5073 logs.go:276] 1 containers: [857ff5a64aef]
	I0930 04:10:11.865171    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0930 04:10:11.878654    5073 logs.go:276] 2 containers: [f87365fcd967 82cb48f54510]
	I0930 04:10:11.878745    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0930 04:10:11.888668    5073 logs.go:276] 1 containers: [341e764f9485]
	I0930 04:10:11.888750    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0930 04:10:11.899247    5073 logs.go:276] 2 containers: [e0d1000bec45 7204ff5e6c12]
	I0930 04:10:11.899330    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0930 04:10:11.909876    5073 logs.go:276] 0 containers: []
	W0930 04:10:11.909887    5073 logs.go:278] No container was found matching "kindnet"
	I0930 04:10:11.909956    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0930 04:10:11.920348    5073 logs.go:276] 1 containers: [e90f49b06c7e]
	I0930 04:10:11.920369    5073 logs.go:123] Gathering logs for kubelet ...
	I0930 04:10:11.920374    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 04:10:11.959424    5073 logs.go:123] Gathering logs for kube-apiserver [5c05fceb7aa1] ...
	I0930 04:10:11.959434    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c05fceb7aa1"
	I0930 04:10:11.984058    5073 logs.go:123] Gathering logs for storage-provisioner [e90f49b06c7e] ...
	I0930 04:10:11.984068    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e90f49b06c7e"
	I0930 04:10:11.998343    5073 logs.go:123] Gathering logs for container status ...
	I0930 04:10:11.998355    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 04:10:12.010493    5073 logs.go:123] Gathering logs for kube-controller-manager [7204ff5e6c12] ...
	I0930 04:10:12.010505    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7204ff5e6c12"
	I0930 04:10:12.028520    5073 logs.go:123] Gathering logs for dmesg ...
	I0930 04:10:12.028533    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 04:10:12.032851    5073 logs.go:123] Gathering logs for describe nodes ...
	I0930 04:10:12.032860    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0930 04:10:12.067385    5073 logs.go:123] Gathering logs for etcd [39ed46927cbf] ...
	I0930 04:10:12.067401    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39ed46927cbf"
	I0930 04:10:12.081510    5073 logs.go:123] Gathering logs for kube-proxy [341e764f9485] ...
	I0930 04:10:12.081519    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 341e764f9485"
	I0930 04:10:12.092806    5073 logs.go:123] Gathering logs for kube-controller-manager [e0d1000bec45] ...
	I0930 04:10:12.092817    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0d1000bec45"
	I0930 04:10:12.112706    5073 logs.go:123] Gathering logs for etcd [9a2747d15d5c] ...
	I0930 04:10:12.112717    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a2747d15d5c"
	I0930 04:10:12.126802    5073 logs.go:123] Gathering logs for kube-scheduler [f87365fcd967] ...
	I0930 04:10:12.126813    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f87365fcd967"
	I0930 04:10:12.138259    5073 logs.go:123] Gathering logs for kube-apiserver [ed13ab559759] ...
	I0930 04:10:12.138271    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed13ab559759"
	I0930 04:10:12.160981    5073 logs.go:123] Gathering logs for coredns [857ff5a64aef] ...
	I0930 04:10:12.160992    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 857ff5a64aef"
	I0930 04:10:12.171852    5073 logs.go:123] Gathering logs for kube-scheduler [82cb48f54510] ...
	I0930 04:10:12.171864    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82cb48f54510"
	I0930 04:10:12.198712    5073 logs.go:123] Gathering logs for Docker ...
	I0930 04:10:12.198724    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0930 04:10:14.726176    5073 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:10:19.728224    5073 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:10:19.728461    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0930 04:10:19.751390    5073 logs.go:276] 2 containers: [ed13ab559759 5c05fceb7aa1]
	I0930 04:10:19.751537    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0930 04:10:19.768070    5073 logs.go:276] 2 containers: [39ed46927cbf 9a2747d15d5c]
	I0930 04:10:19.768172    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0930 04:10:19.780835    5073 logs.go:276] 1 containers: [857ff5a64aef]
	I0930 04:10:19.780925    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0930 04:10:19.792038    5073 logs.go:276] 2 containers: [f87365fcd967 82cb48f54510]
	I0930 04:10:19.792116    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0930 04:10:19.802402    5073 logs.go:276] 1 containers: [341e764f9485]
	I0930 04:10:19.802474    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0930 04:10:19.821292    5073 logs.go:276] 2 containers: [e0d1000bec45 7204ff5e6c12]
	I0930 04:10:19.821366    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0930 04:10:19.831165    5073 logs.go:276] 0 containers: []
	W0930 04:10:19.831174    5073 logs.go:278] No container was found matching "kindnet"
	I0930 04:10:19.831237    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0930 04:10:19.841892    5073 logs.go:276] 1 containers: [e90f49b06c7e]
	I0930 04:10:19.841910    5073 logs.go:123] Gathering logs for describe nodes ...
	I0930 04:10:19.841916    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0930 04:10:19.880298    5073 logs.go:123] Gathering logs for etcd [9a2747d15d5c] ...
	I0930 04:10:19.880309    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a2747d15d5c"
	I0930 04:10:19.894796    5073 logs.go:123] Gathering logs for kubelet ...
	I0930 04:10:19.894810    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 04:10:19.931262    5073 logs.go:123] Gathering logs for dmesg ...
	I0930 04:10:19.931272    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 04:10:19.935695    5073 logs.go:123] Gathering logs for kube-controller-manager [7204ff5e6c12] ...
	I0930 04:10:19.935702    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7204ff5e6c12"
	I0930 04:10:19.948712    5073 logs.go:123] Gathering logs for storage-provisioner [e90f49b06c7e] ...
	I0930 04:10:19.948724    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e90f49b06c7e"
	I0930 04:10:19.960161    5073 logs.go:123] Gathering logs for container status ...
	I0930 04:10:19.960169    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 04:10:19.973329    5073 logs.go:123] Gathering logs for kube-apiserver [5c05fceb7aa1] ...
	I0930 04:10:19.973338    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c05fceb7aa1"
	I0930 04:10:19.998798    5073 logs.go:123] Gathering logs for etcd [39ed46927cbf] ...
	I0930 04:10:19.998808    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39ed46927cbf"
	I0930 04:10:20.012925    5073 logs.go:123] Gathering logs for coredns [857ff5a64aef] ...
	I0930 04:10:20.012933    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 857ff5a64aef"
	I0930 04:10:20.024602    5073 logs.go:123] Gathering logs for kube-scheduler [82cb48f54510] ...
	I0930 04:10:20.024614    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82cb48f54510"
	I0930 04:10:20.041171    5073 logs.go:123] Gathering logs for kube-proxy [341e764f9485] ...
	I0930 04:10:20.041180    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 341e764f9485"
	I0930 04:10:20.052911    5073 logs.go:123] Gathering logs for kube-apiserver [ed13ab559759] ...
	I0930 04:10:20.052922    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed13ab559759"
	I0930 04:10:20.067210    5073 logs.go:123] Gathering logs for kube-scheduler [f87365fcd967] ...
	I0930 04:10:20.067221    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f87365fcd967"
	I0930 04:10:20.079015    5073 logs.go:123] Gathering logs for kube-controller-manager [e0d1000bec45] ...
	I0930 04:10:20.079025    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0d1000bec45"
	I0930 04:10:20.095757    5073 logs.go:123] Gathering logs for Docker ...
	I0930 04:10:20.095768    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0930 04:10:22.620379    5073 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:10:27.622396    5073 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:10:27.622726    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0930 04:10:27.648166    5073 logs.go:276] 2 containers: [ed13ab559759 5c05fceb7aa1]
	I0930 04:10:27.648315    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0930 04:10:27.665908    5073 logs.go:276] 2 containers: [39ed46927cbf 9a2747d15d5c]
	I0930 04:10:27.666010    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0930 04:10:27.683646    5073 logs.go:276] 1 containers: [857ff5a64aef]
	I0930 04:10:27.683736    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0930 04:10:27.694607    5073 logs.go:276] 2 containers: [f87365fcd967 82cb48f54510]
	I0930 04:10:27.694701    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0930 04:10:27.705006    5073 logs.go:276] 1 containers: [341e764f9485]
	I0930 04:10:27.705092    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0930 04:10:27.715423    5073 logs.go:276] 2 containers: [e0d1000bec45 7204ff5e6c12]
	I0930 04:10:27.715521    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0930 04:10:27.725591    5073 logs.go:276] 0 containers: []
	W0930 04:10:27.725602    5073 logs.go:278] No container was found matching "kindnet"
	I0930 04:10:27.725668    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0930 04:10:27.736566    5073 logs.go:276] 1 containers: [e90f49b06c7e]
	I0930 04:10:27.736585    5073 logs.go:123] Gathering logs for kube-apiserver [ed13ab559759] ...
	I0930 04:10:27.736591    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed13ab559759"
	I0930 04:10:27.750081    5073 logs.go:123] Gathering logs for kube-scheduler [82cb48f54510] ...
	I0930 04:10:27.750096    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82cb48f54510"
	I0930 04:10:27.766767    5073 logs.go:123] Gathering logs for Docker ...
	I0930 04:10:27.766778    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0930 04:10:27.790096    5073 logs.go:123] Gathering logs for container status ...
	I0930 04:10:27.790104    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 04:10:27.802388    5073 logs.go:123] Gathering logs for kubelet ...
	I0930 04:10:27.802398    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 04:10:27.841159    5073 logs.go:123] Gathering logs for describe nodes ...
	I0930 04:10:27.841167    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0930 04:10:27.876126    5073 logs.go:123] Gathering logs for kube-apiserver [5c05fceb7aa1] ...
	I0930 04:10:27.876141    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c05fceb7aa1"
	I0930 04:10:27.901649    5073 logs.go:123] Gathering logs for etcd [9a2747d15d5c] ...
	I0930 04:10:27.901660    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a2747d15d5c"
	I0930 04:10:27.916284    5073 logs.go:123] Gathering logs for coredns [857ff5a64aef] ...
	I0930 04:10:27.916295    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 857ff5a64aef"
	I0930 04:10:27.927287    5073 logs.go:123] Gathering logs for dmesg ...
	I0930 04:10:27.927297    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 04:10:27.931533    5073 logs.go:123] Gathering logs for kube-controller-manager [e0d1000bec45] ...
	I0930 04:10:27.931542    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0d1000bec45"
	I0930 04:10:27.954698    5073 logs.go:123] Gathering logs for kube-controller-manager [7204ff5e6c12] ...
	I0930 04:10:27.954707    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7204ff5e6c12"
	I0930 04:10:27.973270    5073 logs.go:123] Gathering logs for storage-provisioner [e90f49b06c7e] ...
	I0930 04:10:27.973281    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e90f49b06c7e"
	I0930 04:10:27.984602    5073 logs.go:123] Gathering logs for kube-scheduler [f87365fcd967] ...
	I0930 04:10:27.984611    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f87365fcd967"
	I0930 04:10:27.996543    5073 logs.go:123] Gathering logs for kube-proxy [341e764f9485] ...
	I0930 04:10:27.996558    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 341e764f9485"
	I0930 04:10:28.008166    5073 logs.go:123] Gathering logs for etcd [39ed46927cbf] ...
	I0930 04:10:28.008176    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39ed46927cbf"
	I0930 04:10:30.523824    5073 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:10:35.525846    5073 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:10:35.526055    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0930 04:10:35.547854    5073 logs.go:276] 2 containers: [ed13ab559759 5c05fceb7aa1]
	I0930 04:10:35.548013    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0930 04:10:35.564655    5073 logs.go:276] 2 containers: [39ed46927cbf 9a2747d15d5c]
	I0930 04:10:35.564761    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0930 04:10:35.577143    5073 logs.go:276] 1 containers: [857ff5a64aef]
	I0930 04:10:35.577231    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0930 04:10:35.588140    5073 logs.go:276] 2 containers: [f87365fcd967 82cb48f54510]
	I0930 04:10:35.588227    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0930 04:10:35.598941    5073 logs.go:276] 1 containers: [341e764f9485]
	I0930 04:10:35.599018    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0930 04:10:35.609237    5073 logs.go:276] 2 containers: [e0d1000bec45 7204ff5e6c12]
	I0930 04:10:35.609314    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0930 04:10:35.619762    5073 logs.go:276] 0 containers: []
	W0930 04:10:35.619774    5073 logs.go:278] No container was found matching "kindnet"
	I0930 04:10:35.619847    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0930 04:10:35.630231    5073 logs.go:276] 1 containers: [e90f49b06c7e]
	I0930 04:10:35.630247    5073 logs.go:123] Gathering logs for kube-apiserver [ed13ab559759] ...
	I0930 04:10:35.630251    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed13ab559759"
	I0930 04:10:35.648223    5073 logs.go:123] Gathering logs for kube-apiserver [5c05fceb7aa1] ...
	I0930 04:10:35.648236    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c05fceb7aa1"
	I0930 04:10:35.673069    5073 logs.go:123] Gathering logs for etcd [9a2747d15d5c] ...
	I0930 04:10:35.673080    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a2747d15d5c"
	I0930 04:10:35.691282    5073 logs.go:123] Gathering logs for coredns [857ff5a64aef] ...
	I0930 04:10:35.691293    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 857ff5a64aef"
	I0930 04:10:35.703052    5073 logs.go:123] Gathering logs for kube-scheduler [f87365fcd967] ...
	I0930 04:10:35.703062    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f87365fcd967"
	I0930 04:10:35.714769    5073 logs.go:123] Gathering logs for kube-proxy [341e764f9485] ...
	I0930 04:10:35.714779    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 341e764f9485"
	I0930 04:10:35.726820    5073 logs.go:123] Gathering logs for dmesg ...
	I0930 04:10:35.726831    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 04:10:35.731216    5073 logs.go:123] Gathering logs for kube-controller-manager [7204ff5e6c12] ...
	I0930 04:10:35.731227    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7204ff5e6c12"
	I0930 04:10:35.743965    5073 logs.go:123] Gathering logs for storage-provisioner [e90f49b06c7e] ...
	I0930 04:10:35.743982    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e90f49b06c7e"
	I0930 04:10:35.755225    5073 logs.go:123] Gathering logs for container status ...
	I0930 04:10:35.755235    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 04:10:35.767253    5073 logs.go:123] Gathering logs for describe nodes ...
	I0930 04:10:35.767264    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0930 04:10:35.802712    5073 logs.go:123] Gathering logs for etcd [39ed46927cbf] ...
	I0930 04:10:35.802726    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39ed46927cbf"
	I0930 04:10:35.817265    5073 logs.go:123] Gathering logs for Docker ...
	I0930 04:10:35.817275    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0930 04:10:35.842156    5073 logs.go:123] Gathering logs for kubelet ...
	I0930 04:10:35.842164    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 04:10:35.881101    5073 logs.go:123] Gathering logs for kube-scheduler [82cb48f54510] ...
	I0930 04:10:35.881114    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82cb48f54510"
	I0930 04:10:35.904402    5073 logs.go:123] Gathering logs for kube-controller-manager [e0d1000bec45] ...
	I0930 04:10:35.904412    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0d1000bec45"
	I0930 04:10:38.423628    5073 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:10:43.425856    5073 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:10:43.426042    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0930 04:10:43.441076    5073 logs.go:276] 2 containers: [ed13ab559759 5c05fceb7aa1]
	I0930 04:10:43.441173    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0930 04:10:43.453998    5073 logs.go:276] 2 containers: [39ed46927cbf 9a2747d15d5c]
	I0930 04:10:43.454087    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0930 04:10:43.464751    5073 logs.go:276] 1 containers: [857ff5a64aef]
	I0930 04:10:43.464824    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0930 04:10:43.476817    5073 logs.go:276] 2 containers: [f87365fcd967 82cb48f54510]
	I0930 04:10:43.476897    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0930 04:10:43.487118    5073 logs.go:276] 1 containers: [341e764f9485]
	I0930 04:10:43.487216    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0930 04:10:43.508004    5073 logs.go:276] 2 containers: [e0d1000bec45 7204ff5e6c12]
	I0930 04:10:43.508087    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0930 04:10:43.517876    5073 logs.go:276] 0 containers: []
	W0930 04:10:43.517891    5073 logs.go:278] No container was found matching "kindnet"
	I0930 04:10:43.517959    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0930 04:10:43.528523    5073 logs.go:276] 1 containers: [e90f49b06c7e]
	I0930 04:10:43.528540    5073 logs.go:123] Gathering logs for etcd [9a2747d15d5c] ...
	I0930 04:10:43.528546    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a2747d15d5c"
	I0930 04:10:43.543101    5073 logs.go:123] Gathering logs for kubelet ...
	I0930 04:10:43.543114    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 04:10:43.582223    5073 logs.go:123] Gathering logs for describe nodes ...
	I0930 04:10:43.582232    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0930 04:10:43.617287    5073 logs.go:123] Gathering logs for kube-apiserver [5c05fceb7aa1] ...
	I0930 04:10:43.617298    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c05fceb7aa1"
	I0930 04:10:43.642850    5073 logs.go:123] Gathering logs for kube-controller-manager [e0d1000bec45] ...
	I0930 04:10:43.642866    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0d1000bec45"
	I0930 04:10:43.660224    5073 logs.go:123] Gathering logs for kube-controller-manager [7204ff5e6c12] ...
	I0930 04:10:43.660236    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7204ff5e6c12"
	I0930 04:10:43.672437    5073 logs.go:123] Gathering logs for Docker ...
	I0930 04:10:43.672448    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0930 04:10:43.694796    5073 logs.go:123] Gathering logs for dmesg ...
	I0930 04:10:43.694806    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 04:10:43.698905    5073 logs.go:123] Gathering logs for kube-apiserver [ed13ab559759] ...
	I0930 04:10:43.698914    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed13ab559759"
	I0930 04:10:43.712553    5073 logs.go:123] Gathering logs for container status ...
	I0930 04:10:43.712563    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 04:10:43.724186    5073 logs.go:123] Gathering logs for kube-proxy [341e764f9485] ...
	I0930 04:10:43.724198    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 341e764f9485"
	I0930 04:10:43.736819    5073 logs.go:123] Gathering logs for storage-provisioner [e90f49b06c7e] ...
	I0930 04:10:43.736832    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e90f49b06c7e"
	I0930 04:10:43.748860    5073 logs.go:123] Gathering logs for kube-scheduler [f87365fcd967] ...
	I0930 04:10:43.748872    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f87365fcd967"
	I0930 04:10:43.761182    5073 logs.go:123] Gathering logs for kube-scheduler [82cb48f54510] ...
	I0930 04:10:43.761197    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82cb48f54510"
	I0930 04:10:43.777710    5073 logs.go:123] Gathering logs for etcd [39ed46927cbf] ...
	I0930 04:10:43.777724    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39ed46927cbf"
	I0930 04:10:43.792470    5073 logs.go:123] Gathering logs for coredns [857ff5a64aef] ...
	I0930 04:10:43.792484    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 857ff5a64aef"
	I0930 04:10:46.305913    5073 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:10:51.308154    5073 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:10:51.308410    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0930 04:10:51.328927    5073 logs.go:276] 2 containers: [ed13ab559759 5c05fceb7aa1]
	I0930 04:10:51.329062    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0930 04:10:51.343041    5073 logs.go:276] 2 containers: [39ed46927cbf 9a2747d15d5c]
	I0930 04:10:51.343132    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0930 04:10:51.355696    5073 logs.go:276] 1 containers: [857ff5a64aef]
	I0930 04:10:51.355780    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0930 04:10:51.365941    5073 logs.go:276] 2 containers: [f87365fcd967 82cb48f54510]
	I0930 04:10:51.366027    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0930 04:10:51.380751    5073 logs.go:276] 1 containers: [341e764f9485]
	I0930 04:10:51.380832    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0930 04:10:51.397688    5073 logs.go:276] 2 containers: [e0d1000bec45 7204ff5e6c12]
	I0930 04:10:51.397788    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0930 04:10:51.419361    5073 logs.go:276] 0 containers: []
	W0930 04:10:51.419373    5073 logs.go:278] No container was found matching "kindnet"
	I0930 04:10:51.419442    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0930 04:10:51.429761    5073 logs.go:276] 1 containers: [e90f49b06c7e]
	I0930 04:10:51.429784    5073 logs.go:123] Gathering logs for kube-scheduler [82cb48f54510] ...
	I0930 04:10:51.429790    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82cb48f54510"
	I0930 04:10:51.449106    5073 logs.go:123] Gathering logs for storage-provisioner [e90f49b06c7e] ...
	I0930 04:10:51.449121    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e90f49b06c7e"
	I0930 04:10:51.460574    5073 logs.go:123] Gathering logs for kubelet ...
	I0930 04:10:51.460584    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 04:10:51.497986    5073 logs.go:123] Gathering logs for kube-proxy [341e764f9485] ...
	I0930 04:10:51.497996    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 341e764f9485"
	I0930 04:10:51.514848    5073 logs.go:123] Gathering logs for kube-scheduler [f87365fcd967] ...
	I0930 04:10:51.514860    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f87365fcd967"
	I0930 04:10:51.529260    5073 logs.go:123] Gathering logs for coredns [857ff5a64aef] ...
	I0930 04:10:51.529271    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 857ff5a64aef"
	I0930 04:10:51.540800    5073 logs.go:123] Gathering logs for kube-controller-manager [e0d1000bec45] ...
	I0930 04:10:51.540813    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0d1000bec45"
	I0930 04:10:51.558443    5073 logs.go:123] Gathering logs for kube-controller-manager [7204ff5e6c12] ...
	I0930 04:10:51.558453    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7204ff5e6c12"
	I0930 04:10:51.571180    5073 logs.go:123] Gathering logs for dmesg ...
	I0930 04:10:51.571193    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 04:10:51.575231    5073 logs.go:123] Gathering logs for kube-apiserver [ed13ab559759] ...
	I0930 04:10:51.575237    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed13ab559759"
	I0930 04:10:51.588713    5073 logs.go:123] Gathering logs for kube-apiserver [5c05fceb7aa1] ...
	I0930 04:10:51.588723    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c05fceb7aa1"
	I0930 04:10:51.620006    5073 logs.go:123] Gathering logs for etcd [39ed46927cbf] ...
	I0930 04:10:51.620016    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39ed46927cbf"
	I0930 04:10:51.634301    5073 logs.go:123] Gathering logs for etcd [9a2747d15d5c] ...
	I0930 04:10:51.634316    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a2747d15d5c"
	I0930 04:10:51.649478    5073 logs.go:123] Gathering logs for Docker ...
	I0930 04:10:51.649488    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0930 04:10:51.672278    5073 logs.go:123] Gathering logs for container status ...
	I0930 04:10:51.672287    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 04:10:51.683870    5073 logs.go:123] Gathering logs for describe nodes ...
	I0930 04:10:51.683884    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0930 04:10:54.223094    5073 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:10:59.225422    5073 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:10:59.225620    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0930 04:10:59.238521    5073 logs.go:276] 2 containers: [ed13ab559759 5c05fceb7aa1]
	I0930 04:10:59.238621    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0930 04:10:59.249236    5073 logs.go:276] 2 containers: [39ed46927cbf 9a2747d15d5c]
	I0930 04:10:59.249316    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0930 04:10:59.260111    5073 logs.go:276] 1 containers: [857ff5a64aef]
	I0930 04:10:59.260191    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0930 04:10:59.270713    5073 logs.go:276] 2 containers: [f87365fcd967 82cb48f54510]
	I0930 04:10:59.270802    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0930 04:10:59.281540    5073 logs.go:276] 1 containers: [341e764f9485]
	I0930 04:10:59.281627    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0930 04:10:59.294489    5073 logs.go:276] 2 containers: [e0d1000bec45 7204ff5e6c12]
	I0930 04:10:59.294576    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0930 04:10:59.304919    5073 logs.go:276] 0 containers: []
	W0930 04:10:59.304935    5073 logs.go:278] No container was found matching "kindnet"
	I0930 04:10:59.305005    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0930 04:10:59.314745    5073 logs.go:276] 1 containers: [e90f49b06c7e]
	I0930 04:10:59.314763    5073 logs.go:123] Gathering logs for etcd [39ed46927cbf] ...
	I0930 04:10:59.314768    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39ed46927cbf"
	I0930 04:10:59.329108    5073 logs.go:123] Gathering logs for container status ...
	I0930 04:10:59.329119    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 04:10:59.341308    5073 logs.go:123] Gathering logs for kubelet ...
	I0930 04:10:59.341321    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 04:10:59.379419    5073 logs.go:123] Gathering logs for etcd [9a2747d15d5c] ...
	I0930 04:10:59.379430    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a2747d15d5c"
	I0930 04:10:59.395170    5073 logs.go:123] Gathering logs for kube-controller-manager [e0d1000bec45] ...
	I0930 04:10:59.395184    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0d1000bec45"
	I0930 04:10:59.414342    5073 logs.go:123] Gathering logs for kube-controller-manager [7204ff5e6c12] ...
	I0930 04:10:59.414368    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7204ff5e6c12"
	I0930 04:10:59.428033    5073 logs.go:123] Gathering logs for Docker ...
	I0930 04:10:59.428044    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0930 04:10:59.452347    5073 logs.go:123] Gathering logs for kube-proxy [341e764f9485] ...
	I0930 04:10:59.452357    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 341e764f9485"
	I0930 04:10:59.464072    5073 logs.go:123] Gathering logs for describe nodes ...
	I0930 04:10:59.464083    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0930 04:10:59.498755    5073 logs.go:123] Gathering logs for kube-apiserver [ed13ab559759] ...
	I0930 04:10:59.498767    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed13ab559759"
	I0930 04:10:59.512518    5073 logs.go:123] Gathering logs for kube-apiserver [5c05fceb7aa1] ...
	I0930 04:10:59.512528    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c05fceb7aa1"
	I0930 04:10:59.538620    5073 logs.go:123] Gathering logs for coredns [857ff5a64aef] ...
	I0930 04:10:59.538632    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 857ff5a64aef"
	I0930 04:10:59.549734    5073 logs.go:123] Gathering logs for kube-scheduler [82cb48f54510] ...
	I0930 04:10:59.549744    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82cb48f54510"
	I0930 04:10:59.566392    5073 logs.go:123] Gathering logs for dmesg ...
	I0930 04:10:59.566409    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 04:10:59.570950    5073 logs.go:123] Gathering logs for kube-scheduler [f87365fcd967] ...
	I0930 04:10:59.570956    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f87365fcd967"
	I0930 04:10:59.582726    5073 logs.go:123] Gathering logs for storage-provisioner [e90f49b06c7e] ...
	I0930 04:10:59.582742    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e90f49b06c7e"
	I0930 04:11:02.096471    5073 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:11:07.098782    5073 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:11:07.099041    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0930 04:11:07.119539    5073 logs.go:276] 2 containers: [ed13ab559759 5c05fceb7aa1]
	I0930 04:11:07.119643    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0930 04:11:07.133731    5073 logs.go:276] 2 containers: [39ed46927cbf 9a2747d15d5c]
	I0930 04:11:07.133831    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0930 04:11:07.167991    5073 logs.go:276] 1 containers: [857ff5a64aef]
	I0930 04:11:07.168074    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0930 04:11:07.178778    5073 logs.go:276] 2 containers: [f87365fcd967 82cb48f54510]
	I0930 04:11:07.178868    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0930 04:11:07.189624    5073 logs.go:276] 1 containers: [341e764f9485]
	I0930 04:11:07.189704    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0930 04:11:07.200646    5073 logs.go:276] 2 containers: [e0d1000bec45 7204ff5e6c12]
	I0930 04:11:07.200733    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0930 04:11:07.210848    5073 logs.go:276] 0 containers: []
	W0930 04:11:07.210860    5073 logs.go:278] No container was found matching "kindnet"
	I0930 04:11:07.210934    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0930 04:11:07.221412    5073 logs.go:276] 1 containers: [e90f49b06c7e]
	I0930 04:11:07.221429    5073 logs.go:123] Gathering logs for Docker ...
	I0930 04:11:07.221438    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0930 04:11:07.244918    5073 logs.go:123] Gathering logs for kubelet ...
	I0930 04:11:07.244926    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 04:11:07.284576    5073 logs.go:123] Gathering logs for kube-apiserver [5c05fceb7aa1] ...
	I0930 04:11:07.284592    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c05fceb7aa1"
	I0930 04:11:07.313736    5073 logs.go:123] Gathering logs for kube-scheduler [f87365fcd967] ...
	I0930 04:11:07.313746    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f87365fcd967"
	I0930 04:11:07.326327    5073 logs.go:123] Gathering logs for kube-scheduler [82cb48f54510] ...
	I0930 04:11:07.326342    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82cb48f54510"
	I0930 04:11:07.342740    5073 logs.go:123] Gathering logs for kube-controller-manager [e0d1000bec45] ...
	I0930 04:11:07.342751    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0d1000bec45"
	I0930 04:11:07.360206    5073 logs.go:123] Gathering logs for storage-provisioner [e90f49b06c7e] ...
	I0930 04:11:07.360217    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e90f49b06c7e"
	I0930 04:11:07.371628    5073 logs.go:123] Gathering logs for kube-apiserver [ed13ab559759] ...
	I0930 04:11:07.371638    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed13ab559759"
	I0930 04:11:07.385790    5073 logs.go:123] Gathering logs for kube-proxy [341e764f9485] ...
	I0930 04:11:07.385800    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 341e764f9485"
	I0930 04:11:07.397397    5073 logs.go:123] Gathering logs for dmesg ...
	I0930 04:11:07.397408    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 04:11:07.401976    5073 logs.go:123] Gathering logs for describe nodes ...
	I0930 04:11:07.401983    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0930 04:11:07.436272    5073 logs.go:123] Gathering logs for kube-controller-manager [7204ff5e6c12] ...
	I0930 04:11:07.436284    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7204ff5e6c12"
	I0930 04:11:07.449270    5073 logs.go:123] Gathering logs for container status ...
	I0930 04:11:07.449284    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 04:11:07.461577    5073 logs.go:123] Gathering logs for etcd [39ed46927cbf] ...
	I0930 04:11:07.461592    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39ed46927cbf"
	I0930 04:11:07.479502    5073 logs.go:123] Gathering logs for etcd [9a2747d15d5c] ...
	I0930 04:11:07.479519    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a2747d15d5c"
	I0930 04:11:07.494354    5073 logs.go:123] Gathering logs for coredns [857ff5a64aef] ...
	I0930 04:11:07.494364    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 857ff5a64aef"
	I0930 04:11:10.007757    5073 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:11:15.009986    5073 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:11:15.010275    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0930 04:11:15.032597    5073 logs.go:276] 2 containers: [ed13ab559759 5c05fceb7aa1]
	I0930 04:11:15.032740    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0930 04:11:15.047878    5073 logs.go:276] 2 containers: [39ed46927cbf 9a2747d15d5c]
	I0930 04:11:15.048002    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0930 04:11:15.060630    5073 logs.go:276] 1 containers: [857ff5a64aef]
	I0930 04:11:15.060718    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0930 04:11:15.071330    5073 logs.go:276] 2 containers: [f87365fcd967 82cb48f54510]
	I0930 04:11:15.071418    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0930 04:11:15.081499    5073 logs.go:276] 1 containers: [341e764f9485]
	I0930 04:11:15.081586    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0930 04:11:15.092116    5073 logs.go:276] 2 containers: [e0d1000bec45 7204ff5e6c12]
	I0930 04:11:15.092185    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0930 04:11:15.106615    5073 logs.go:276] 0 containers: []
	W0930 04:11:15.106628    5073 logs.go:278] No container was found matching "kindnet"
	I0930 04:11:15.106721    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0930 04:11:15.117565    5073 logs.go:276] 1 containers: [e90f49b06c7e]
	I0930 04:11:15.117584    5073 logs.go:123] Gathering logs for etcd [9a2747d15d5c] ...
	I0930 04:11:15.117592    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a2747d15d5c"
	I0930 04:11:15.134150    5073 logs.go:123] Gathering logs for Docker ...
	I0930 04:11:15.134160    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0930 04:11:15.157244    5073 logs.go:123] Gathering logs for dmesg ...
	I0930 04:11:15.157254    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 04:11:15.161464    5073 logs.go:123] Gathering logs for kube-scheduler [f87365fcd967] ...
	I0930 04:11:15.161480    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f87365fcd967"
	I0930 04:11:15.173787    5073 logs.go:123] Gathering logs for kube-controller-manager [7204ff5e6c12] ...
	I0930 04:11:15.173799    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7204ff5e6c12"
	I0930 04:11:15.186755    5073 logs.go:123] Gathering logs for storage-provisioner [e90f49b06c7e] ...
	I0930 04:11:15.186767    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e90f49b06c7e"
	I0930 04:11:15.197972    5073 logs.go:123] Gathering logs for etcd [39ed46927cbf] ...
	I0930 04:11:15.197981    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39ed46927cbf"
	I0930 04:11:15.222110    5073 logs.go:123] Gathering logs for kube-proxy [341e764f9485] ...
	I0930 04:11:15.222124    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 341e764f9485"
	I0930 04:11:15.238808    5073 logs.go:123] Gathering logs for kube-controller-manager [e0d1000bec45] ...
	I0930 04:11:15.238823    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0d1000bec45"
	I0930 04:11:15.256380    5073 logs.go:123] Gathering logs for container status ...
	I0930 04:11:15.256390    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 04:11:15.268318    5073 logs.go:123] Gathering logs for kube-apiserver [5c05fceb7aa1] ...
	I0930 04:11:15.268328    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c05fceb7aa1"
	I0930 04:11:15.299073    5073 logs.go:123] Gathering logs for describe nodes ...
	I0930 04:11:15.299082    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0930 04:11:15.333717    5073 logs.go:123] Gathering logs for kube-apiserver [ed13ab559759] ...
	I0930 04:11:15.333729    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed13ab559759"
	I0930 04:11:15.347745    5073 logs.go:123] Gathering logs for coredns [857ff5a64aef] ...
	I0930 04:11:15.347754    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 857ff5a64aef"
	I0930 04:11:15.359211    5073 logs.go:123] Gathering logs for kube-scheduler [82cb48f54510] ...
	I0930 04:11:15.359222    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82cb48f54510"
	I0930 04:11:15.375494    5073 logs.go:123] Gathering logs for kubelet ...
	I0930 04:11:15.375504    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 04:11:17.914070    5073 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:11:22.916256    5073 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:11:22.916479    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0930 04:11:22.931027    5073 logs.go:276] 2 containers: [ed13ab559759 5c05fceb7aa1]
	I0930 04:11:22.931113    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0930 04:11:22.942906    5073 logs.go:276] 2 containers: [39ed46927cbf 9a2747d15d5c]
	I0930 04:11:22.942996    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0930 04:11:22.953535    5073 logs.go:276] 1 containers: [857ff5a64aef]
	I0930 04:11:22.953623    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0930 04:11:22.964038    5073 logs.go:276] 2 containers: [f87365fcd967 82cb48f54510]
	I0930 04:11:22.964114    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0930 04:11:22.982442    5073 logs.go:276] 1 containers: [341e764f9485]
	I0930 04:11:22.982513    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0930 04:11:22.995015    5073 logs.go:276] 2 containers: [e0d1000bec45 7204ff5e6c12]
	I0930 04:11:22.995102    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0930 04:11:23.005377    5073 logs.go:276] 0 containers: []
	W0930 04:11:23.005387    5073 logs.go:278] No container was found matching "kindnet"
	I0930 04:11:23.005451    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0930 04:11:23.015724    5073 logs.go:276] 1 containers: [e90f49b06c7e]
	I0930 04:11:23.015740    5073 logs.go:123] Gathering logs for kubelet ...
	I0930 04:11:23.015746    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 04:11:23.054815    5073 logs.go:123] Gathering logs for dmesg ...
	I0930 04:11:23.054822    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 04:11:23.070103    5073 logs.go:123] Gathering logs for kube-apiserver [ed13ab559759] ...
	I0930 04:11:23.070112    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed13ab559759"
	I0930 04:11:23.090106    5073 logs.go:123] Gathering logs for etcd [9a2747d15d5c] ...
	I0930 04:11:23.090116    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a2747d15d5c"
	I0930 04:11:23.104955    5073 logs.go:123] Gathering logs for kube-controller-manager [e0d1000bec45] ...
	I0930 04:11:23.104965    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0d1000bec45"
	I0930 04:11:23.122094    5073 logs.go:123] Gathering logs for container status ...
	I0930 04:11:23.122104    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 04:11:23.134858    5073 logs.go:123] Gathering logs for describe nodes ...
	I0930 04:11:23.134868    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0930 04:11:23.172653    5073 logs.go:123] Gathering logs for coredns [857ff5a64aef] ...
	I0930 04:11:23.172664    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 857ff5a64aef"
	I0930 04:11:23.188303    5073 logs.go:123] Gathering logs for kube-scheduler [82cb48f54510] ...
	I0930 04:11:23.188319    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82cb48f54510"
	I0930 04:11:23.204882    5073 logs.go:123] Gathering logs for storage-provisioner [e90f49b06c7e] ...
	I0930 04:11:23.204893    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e90f49b06c7e"
	I0930 04:11:23.216596    5073 logs.go:123] Gathering logs for kube-apiserver [5c05fceb7aa1] ...
	I0930 04:11:23.216611    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c05fceb7aa1"
	I0930 04:11:23.241218    5073 logs.go:123] Gathering logs for kube-scheduler [f87365fcd967] ...
	I0930 04:11:23.241228    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f87365fcd967"
	I0930 04:11:23.252894    5073 logs.go:123] Gathering logs for kube-controller-manager [7204ff5e6c12] ...
	I0930 04:11:23.252903    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7204ff5e6c12"
	I0930 04:11:23.272323    5073 logs.go:123] Gathering logs for etcd [39ed46927cbf] ...
	I0930 04:11:23.272335    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39ed46927cbf"
	I0930 04:11:23.286415    5073 logs.go:123] Gathering logs for kube-proxy [341e764f9485] ...
	I0930 04:11:23.286425    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 341e764f9485"
	I0930 04:11:23.298415    5073 logs.go:123] Gathering logs for Docker ...
	I0930 04:11:23.298425    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0930 04:11:25.824383    5073 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:11:30.826617    5073 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:11:30.826786    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0930 04:11:30.838501    5073 logs.go:276] 2 containers: [ed13ab559759 5c05fceb7aa1]
	I0930 04:11:30.838593    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0930 04:11:30.849737    5073 logs.go:276] 2 containers: [39ed46927cbf 9a2747d15d5c]
	I0930 04:11:30.849827    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0930 04:11:30.865835    5073 logs.go:276] 1 containers: [857ff5a64aef]
	I0930 04:11:30.865926    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0930 04:11:30.876669    5073 logs.go:276] 2 containers: [f87365fcd967 82cb48f54510]
	I0930 04:11:30.876754    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0930 04:11:30.887921    5073 logs.go:276] 1 containers: [341e764f9485]
	I0930 04:11:30.888007    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0930 04:11:30.898393    5073 logs.go:276] 2 containers: [e0d1000bec45 7204ff5e6c12]
	I0930 04:11:30.898479    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0930 04:11:30.908427    5073 logs.go:276] 0 containers: []
	W0930 04:11:30.908438    5073 logs.go:278] No container was found matching "kindnet"
	I0930 04:11:30.908507    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0930 04:11:30.918916    5073 logs.go:276] 1 containers: [e90f49b06c7e]
	I0930 04:11:30.918934    5073 logs.go:123] Gathering logs for dmesg ...
	I0930 04:11:30.918940    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 04:11:30.923445    5073 logs.go:123] Gathering logs for kube-apiserver [5c05fceb7aa1] ...
	I0930 04:11:30.923451    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c05fceb7aa1"
	I0930 04:11:30.948954    5073 logs.go:123] Gathering logs for kube-proxy [341e764f9485] ...
	I0930 04:11:30.948967    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 341e764f9485"
	I0930 04:11:30.960574    5073 logs.go:123] Gathering logs for kube-controller-manager [e0d1000bec45] ...
	I0930 04:11:30.960589    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0d1000bec45"
	I0930 04:11:30.978982    5073 logs.go:123] Gathering logs for kubelet ...
	I0930 04:11:30.978998    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 04:11:31.017912    5073 logs.go:123] Gathering logs for describe nodes ...
	I0930 04:11:31.017930    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0930 04:11:31.052834    5073 logs.go:123] Gathering logs for coredns [857ff5a64aef] ...
	I0930 04:11:31.052850    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 857ff5a64aef"
	I0930 04:11:31.064305    5073 logs.go:123] Gathering logs for kube-scheduler [82cb48f54510] ...
	I0930 04:11:31.064316    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82cb48f54510"
	I0930 04:11:31.081907    5073 logs.go:123] Gathering logs for kube-controller-manager [7204ff5e6c12] ...
	I0930 04:11:31.081917    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7204ff5e6c12"
	I0930 04:11:31.093816    5073 logs.go:123] Gathering logs for Docker ...
	I0930 04:11:31.093824    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0930 04:11:31.115538    5073 logs.go:123] Gathering logs for kube-apiserver [ed13ab559759] ...
	I0930 04:11:31.115545    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed13ab559759"
	I0930 04:11:31.129531    5073 logs.go:123] Gathering logs for etcd [9a2747d15d5c] ...
	I0930 04:11:31.129546    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a2747d15d5c"
	I0930 04:11:31.149771    5073 logs.go:123] Gathering logs for kube-scheduler [f87365fcd967] ...
	I0930 04:11:31.149781    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f87365fcd967"
	I0930 04:11:31.161634    5073 logs.go:123] Gathering logs for container status ...
	I0930 04:11:31.161645    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 04:11:31.173931    5073 logs.go:123] Gathering logs for etcd [39ed46927cbf] ...
	I0930 04:11:31.173940    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39ed46927cbf"
	I0930 04:11:31.188112    5073 logs.go:123] Gathering logs for storage-provisioner [e90f49b06c7e] ...
	I0930 04:11:31.188124    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e90f49b06c7e"
	I0930 04:11:33.701806    5073 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:11:38.704621    5073 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:11:38.705188    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0930 04:11:38.740517    5073 logs.go:276] 2 containers: [ed13ab559759 5c05fceb7aa1]
	I0930 04:11:38.740691    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0930 04:11:38.761834    5073 logs.go:276] 2 containers: [39ed46927cbf 9a2747d15d5c]
	I0930 04:11:38.761944    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0930 04:11:38.777521    5073 logs.go:276] 1 containers: [857ff5a64aef]
	I0930 04:11:38.777614    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0930 04:11:38.790537    5073 logs.go:276] 2 containers: [f87365fcd967 82cb48f54510]
	I0930 04:11:38.790625    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0930 04:11:38.801358    5073 logs.go:276] 1 containers: [341e764f9485]
	I0930 04:11:38.801443    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0930 04:11:38.812344    5073 logs.go:276] 2 containers: [e0d1000bec45 7204ff5e6c12]
	I0930 04:11:38.812432    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0930 04:11:38.823255    5073 logs.go:276] 0 containers: []
	W0930 04:11:38.823268    5073 logs.go:278] No container was found matching "kindnet"
	I0930 04:11:38.823343    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0930 04:11:38.834286    5073 logs.go:276] 1 containers: [e90f49b06c7e]
	I0930 04:11:38.834304    5073 logs.go:123] Gathering logs for kube-proxy [341e764f9485] ...
	I0930 04:11:38.834310    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 341e764f9485"
	I0930 04:11:38.847326    5073 logs.go:123] Gathering logs for dmesg ...
	I0930 04:11:38.847336    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 04:11:38.853279    5073 logs.go:123] Gathering logs for describe nodes ...
	I0930 04:11:38.853288    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0930 04:11:38.892557    5073 logs.go:123] Gathering logs for kube-apiserver [ed13ab559759] ...
	I0930 04:11:38.892568    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed13ab559759"
	I0930 04:11:38.907079    5073 logs.go:123] Gathering logs for etcd [9a2747d15d5c] ...
	I0930 04:11:38.907088    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a2747d15d5c"
	I0930 04:11:38.922089    5073 logs.go:123] Gathering logs for coredns [857ff5a64aef] ...
	I0930 04:11:38.922100    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 857ff5a64aef"
	I0930 04:11:38.933476    5073 logs.go:123] Gathering logs for kubelet ...
	I0930 04:11:38.933488    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 04:11:38.971984    5073 logs.go:123] Gathering logs for kube-apiserver [5c05fceb7aa1] ...
	I0930 04:11:38.971991    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c05fceb7aa1"
	I0930 04:11:38.997216    5073 logs.go:123] Gathering logs for kube-controller-manager [7204ff5e6c12] ...
	I0930 04:11:38.997227    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7204ff5e6c12"
	I0930 04:11:39.009631    5073 logs.go:123] Gathering logs for storage-provisioner [e90f49b06c7e] ...
	I0930 04:11:39.009642    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e90f49b06c7e"
	I0930 04:11:39.021595    5073 logs.go:123] Gathering logs for Docker ...
	I0930 04:11:39.021606    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0930 04:11:39.044473    5073 logs.go:123] Gathering logs for kube-scheduler [f87365fcd967] ...
	I0930 04:11:39.044481    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f87365fcd967"
	I0930 04:11:39.056293    5073 logs.go:123] Gathering logs for etcd [39ed46927cbf] ...
	I0930 04:11:39.056309    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39ed46927cbf"
	I0930 04:11:39.070235    5073 logs.go:123] Gathering logs for kube-scheduler [82cb48f54510] ...
	I0930 04:11:39.070245    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82cb48f54510"
	I0930 04:11:39.088684    5073 logs.go:123] Gathering logs for kube-controller-manager [e0d1000bec45] ...
	I0930 04:11:39.088695    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0d1000bec45"
	I0930 04:11:39.108182    5073 logs.go:123] Gathering logs for container status ...
	I0930 04:11:39.108196    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 04:11:41.622628    5073 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:11:46.625354    5073 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:11:46.625652    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0930 04:11:46.648940    5073 logs.go:276] 2 containers: [ed13ab559759 5c05fceb7aa1]
	I0930 04:11:46.649063    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0930 04:11:46.665028    5073 logs.go:276] 2 containers: [39ed46927cbf 9a2747d15d5c]
	I0930 04:11:46.665125    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0930 04:11:46.678195    5073 logs.go:276] 1 containers: [857ff5a64aef]
	I0930 04:11:46.678276    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0930 04:11:46.689704    5073 logs.go:276] 2 containers: [f87365fcd967 82cb48f54510]
	I0930 04:11:46.689789    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0930 04:11:46.699814    5073 logs.go:276] 1 containers: [341e764f9485]
	I0930 04:11:46.699897    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0930 04:11:46.710469    5073 logs.go:276] 2 containers: [e0d1000bec45 7204ff5e6c12]
	I0930 04:11:46.710543    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0930 04:11:46.721137    5073 logs.go:276] 0 containers: []
	W0930 04:11:46.721151    5073 logs.go:278] No container was found matching "kindnet"
	I0930 04:11:46.721220    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0930 04:11:46.732074    5073 logs.go:276] 1 containers: [e90f49b06c7e]
	I0930 04:11:46.732090    5073 logs.go:123] Gathering logs for etcd [39ed46927cbf] ...
	I0930 04:11:46.732095    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39ed46927cbf"
	I0930 04:11:46.745646    5073 logs.go:123] Gathering logs for etcd [9a2747d15d5c] ...
	I0930 04:11:46.745661    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a2747d15d5c"
	I0930 04:11:46.759809    5073 logs.go:123] Gathering logs for kube-scheduler [f87365fcd967] ...
	I0930 04:11:46.759825    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f87365fcd967"
	I0930 04:11:46.772452    5073 logs.go:123] Gathering logs for container status ...
	I0930 04:11:46.772466    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 04:11:46.786643    5073 logs.go:123] Gathering logs for Docker ...
	I0930 04:11:46.786653    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0930 04:11:46.808990    5073 logs.go:123] Gathering logs for kubelet ...
	I0930 04:11:46.808999    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 04:11:46.846540    5073 logs.go:123] Gathering logs for kube-apiserver [ed13ab559759] ...
	I0930 04:11:46.846556    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed13ab559759"
	I0930 04:11:46.861553    5073 logs.go:123] Gathering logs for kube-apiserver [5c05fceb7aa1] ...
	I0930 04:11:46.861569    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c05fceb7aa1"
	I0930 04:11:46.887228    5073 logs.go:123] Gathering logs for kube-scheduler [82cb48f54510] ...
	I0930 04:11:46.887239    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82cb48f54510"
	I0930 04:11:46.904127    5073 logs.go:123] Gathering logs for kube-proxy [341e764f9485] ...
	I0930 04:11:46.904139    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 341e764f9485"
	I0930 04:11:46.916670    5073 logs.go:123] Gathering logs for kube-controller-manager [7204ff5e6c12] ...
	I0930 04:11:46.916681    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7204ff5e6c12"
	I0930 04:11:46.931227    5073 logs.go:123] Gathering logs for kube-controller-manager [e0d1000bec45] ...
	I0930 04:11:46.931236    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0d1000bec45"
	I0930 04:11:46.948883    5073 logs.go:123] Gathering logs for storage-provisioner [e90f49b06c7e] ...
	I0930 04:11:46.948892    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e90f49b06c7e"
	I0930 04:11:46.960973    5073 logs.go:123] Gathering logs for dmesg ...
	I0930 04:11:46.960984    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 04:11:46.965052    5073 logs.go:123] Gathering logs for describe nodes ...
	I0930 04:11:46.965061    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0930 04:11:47.005517    5073 logs.go:123] Gathering logs for coredns [857ff5a64aef] ...
	I0930 04:11:47.005528    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 857ff5a64aef"
	I0930 04:11:49.519440    5073 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:11:54.521781    5073 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:11:54.521873    5073 kubeadm.go:597] duration metric: took 4m3.362254917s to restartPrimaryControlPlane
	W0930 04:11:54.521939    5073 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0930 04:11:54.521969    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0930 04:11:55.561439    5073 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.039476291s)
	I0930 04:11:55.561518    5073 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0930 04:11:55.566321    5073 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0930 04:11:55.569014    5073 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0930 04:11:55.571627    5073 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0930 04:11:55.571634    5073 kubeadm.go:157] found existing configuration files:
	
	I0930 04:11:55.571662    5073 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50491 /etc/kubernetes/admin.conf
	I0930 04:11:55.574683    5073 kubeadm.go:163] "https://control-plane.minikube.internal:50491" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50491 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0930 04:11:55.574715    5073 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0930 04:11:55.577486    5073 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50491 /etc/kubernetes/kubelet.conf
	I0930 04:11:55.579778    5073 kubeadm.go:163] "https://control-plane.minikube.internal:50491" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50491 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0930 04:11:55.579801    5073 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0930 04:11:55.582827    5073 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50491 /etc/kubernetes/controller-manager.conf
	I0930 04:11:55.585659    5073 kubeadm.go:163] "https://control-plane.minikube.internal:50491" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50491 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0930 04:11:55.585684    5073 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0930 04:11:55.588126    5073 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50491 /etc/kubernetes/scheduler.conf
	I0930 04:11:55.591120    5073 kubeadm.go:163] "https://control-plane.minikube.internal:50491" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50491 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0930 04:11:55.591145    5073 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0930 04:11:55.594085    5073 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0930 04:11:55.610288    5073 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0930 04:11:55.610317    5073 kubeadm.go:310] [preflight] Running pre-flight checks
	I0930 04:11:55.658032    5073 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0930 04:11:55.658093    5073 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0930 04:11:55.658142    5073 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0930 04:11:55.708736    5073 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0930 04:11:55.712330    5073 out.go:235]   - Generating certificates and keys ...
	I0930 04:11:55.712365    5073 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0930 04:11:55.712400    5073 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0930 04:11:55.712504    5073 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0930 04:11:55.712541    5073 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0930 04:11:55.712626    5073 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0930 04:11:55.712654    5073 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0930 04:11:55.712681    5073 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0930 04:11:55.712753    5073 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0930 04:11:55.712804    5073 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0930 04:11:55.712841    5073 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0930 04:11:55.712902    5073 kubeadm.go:310] [certs] Using the existing "sa" key
	I0930 04:11:55.712930    5073 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0930 04:11:56.100578    5073 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0930 04:11:56.654738    5073 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0930 04:11:56.725664    5073 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0930 04:11:56.861680    5073 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0930 04:11:56.890354    5073 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0930 04:11:56.890796    5073 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0930 04:11:56.890829    5073 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0930 04:11:56.975241    5073 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0930 04:11:56.978433    5073 out.go:235]   - Booting up control plane ...
	I0930 04:11:56.978484    5073 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0930 04:11:56.978529    5073 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0930 04:11:56.978564    5073 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0930 04:11:56.978610    5073 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0930 04:11:56.978698    5073 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0930 04:12:00.980612    5073 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.002680 seconds
	I0930 04:12:00.980779    5073 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0930 04:12:00.983777    5073 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0930 04:12:01.501002    5073 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0930 04:12:01.501290    5073 kubeadm.go:310] [mark-control-plane] Marking the node stopped-upgrade-312000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0930 04:12:02.005201    5073 kubeadm.go:310] [bootstrap-token] Using token: 0avxwc.umyj1qdkitmbz22p
	I0930 04:12:02.012414    5073 out.go:235]   - Configuring RBAC rules ...
	I0930 04:12:02.012467    5073 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0930 04:12:02.012521    5073 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0930 04:12:02.019101    5073 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0930 04:12:02.020037    5073 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0930 04:12:02.021064    5073 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0930 04:12:02.021970    5073 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0930 04:12:02.025157    5073 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0930 04:12:02.215642    5073 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0930 04:12:02.409521    5073 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0930 04:12:02.410027    5073 kubeadm.go:310] 
	I0930 04:12:02.410056    5073 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0930 04:12:02.410059    5073 kubeadm.go:310] 
	I0930 04:12:02.410092    5073 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0930 04:12:02.410099    5073 kubeadm.go:310] 
	I0930 04:12:02.410117    5073 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0930 04:12:02.410154    5073 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0930 04:12:02.410184    5073 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0930 04:12:02.410191    5073 kubeadm.go:310] 
	I0930 04:12:02.410215    5073 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0930 04:12:02.410219    5073 kubeadm.go:310] 
	I0930 04:12:02.410245    5073 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0930 04:12:02.410247    5073 kubeadm.go:310] 
	I0930 04:12:02.410278    5073 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0930 04:12:02.410311    5073 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0930 04:12:02.410349    5073 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0930 04:12:02.410352    5073 kubeadm.go:310] 
	I0930 04:12:02.410398    5073 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0930 04:12:02.410435    5073 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0930 04:12:02.410440    5073 kubeadm.go:310] 
	I0930 04:12:02.410487    5073 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 0avxwc.umyj1qdkitmbz22p \
	I0930 04:12:02.410543    5073 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:72c345a63d908b27c1ed290ebc60ebd5e5e1c4e3ebfaa90fcb5390bc8578ae1d \
	I0930 04:12:02.410556    5073 kubeadm.go:310] 	--control-plane 
	I0930 04:12:02.410561    5073 kubeadm.go:310] 
	I0930 04:12:02.410598    5073 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0930 04:12:02.410601    5073 kubeadm.go:310] 
	I0930 04:12:02.410635    5073 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 0avxwc.umyj1qdkitmbz22p \
	I0930 04:12:02.410682    5073 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:72c345a63d908b27c1ed290ebc60ebd5e5e1c4e3ebfaa90fcb5390bc8578ae1d 
	I0930 04:12:02.410901    5073 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0930 04:12:02.410991    5073 cni.go:84] Creating CNI manager for ""
	I0930 04:12:02.411003    5073 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0930 04:12:02.414607    5073 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0930 04:12:02.421553    5073 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0930 04:12:02.424583    5073 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0930 04:12:02.429357    5073 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0930 04:12:02.429408    5073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 04:12:02.429424    5073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-312000 minikube.k8s.io/updated_at=2024_09_30T04_12_02_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=b68b4b088317c82ffa16da1c47933e77f0f5d128 minikube.k8s.io/name=stopped-upgrade-312000 minikube.k8s.io/primary=true
	I0930 04:12:02.474703    5073 kubeadm.go:1113] duration metric: took 45.328917ms to wait for elevateKubeSystemPrivileges
	I0930 04:12:02.474721    5073 ops.go:34] apiserver oom_adj: -16
	I0930 04:12:02.474742    5073 kubeadm.go:394] duration metric: took 4m11.332896208s to StartCluster
	I0930 04:12:02.474756    5073 settings.go:142] acquiring lock: {Name:mk8d331f80592adde11c8565cba0670e3b2db485 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 04:12:02.474856    5073 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19734-1406/kubeconfig
	I0930 04:12:02.475272    5073 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19734-1406/kubeconfig: {Name:mkab83a5d15ec3b983b07760462d9a2ee8e3b4e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 04:12:02.475495    5073 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0930 04:12:02.475524    5073 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0930 04:12:02.475608    5073 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-312000"
	I0930 04:12:02.475615    5073 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-312000"
	W0930 04:12:02.475619    5073 addons.go:243] addon storage-provisioner should already be in state true
	I0930 04:12:02.475620    5073 config.go:182] Loaded profile config "stopped-upgrade-312000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0930 04:12:02.475629    5073 host.go:66] Checking if "stopped-upgrade-312000" exists ...
	I0930 04:12:02.475639    5073 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-312000"
	I0930 04:12:02.475709    5073 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-312000"
	I0930 04:12:02.475970    5073 retry.go:31] will retry after 689.330212ms: connect: dial unix /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/stopped-upgrade-312000/monitor: connect: connection refused
	I0930 04:12:02.476712    5073 kapi.go:59] client config for stopped-upgrade-312000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19734-1406/.minikube/profiles/stopped-upgrade-312000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19734-1406/.minikube/profiles/stopped-upgrade-312000/client.key", CAFile:"/Users/jenkins/minikube-integration/19734-1406/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x10662e5d0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0930 04:12:02.476865    5073 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-312000"
	W0930 04:12:02.476870    5073 addons.go:243] addon default-storageclass should already be in state true
	I0930 04:12:02.476876    5073 host.go:66] Checking if "stopped-upgrade-312000" exists ...
	I0930 04:12:02.477432    5073 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0930 04:12:02.477437    5073 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0930 04:12:02.477442    5073 sshutil.go:53] new ssh client: &{IP:localhost Port:50456 SSHKeyPath:/Users/jenkins/minikube-integration/19734-1406/.minikube/machines/stopped-upgrade-312000/id_rsa Username:docker}
	I0930 04:12:02.481526    5073 out.go:177] * Verifying Kubernetes components...
	I0930 04:12:02.487586    5073 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 04:12:02.579041    5073 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0930 04:12:02.584237    5073 api_server.go:52] waiting for apiserver process to appear ...
	I0930 04:12:02.584283    5073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 04:12:02.588308    5073 api_server.go:72] duration metric: took 112.803458ms to wait for apiserver process to appear ...
	I0930 04:12:02.588315    5073 api_server.go:88] waiting for apiserver healthz status ...
	I0930 04:12:02.588323    5073 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:12:02.657507    5073 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0930 04:12:02.960172    5073 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0930 04:12:02.960189    5073 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0930 04:12:03.171043    5073 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0930 04:12:03.175122    5073 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0930 04:12:03.175132    5073 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0930 04:12:03.175148    5073 sshutil.go:53] new ssh client: &{IP:localhost Port:50456 SSHKeyPath:/Users/jenkins/minikube-integration/19734-1406/.minikube/machines/stopped-upgrade-312000/id_rsa Username:docker}
	I0930 04:12:03.217687    5073 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0930 04:12:07.590330    5073 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:12:07.590381    5073 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:12:12.590683    5073 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:12:12.590738    5073 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:12:17.591100    5073 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:12:17.591170    5073 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:12:22.591594    5073 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:12:22.591653    5073 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:12:27.592253    5073 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:12:27.592287    5073 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:12:32.593048    5073 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:12:32.593077    5073 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0930 04:12:32.961905    5073 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0930 04:12:32.965093    5073 out.go:177] * Enabled addons: storage-provisioner
	I0930 04:12:32.984692    5073 addons.go:510] duration metric: took 30.509712792s for enable addons: enabled=[storage-provisioner]
	I0930 04:12:37.594485    5073 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:12:37.594541    5073 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:12:42.596151    5073 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:12:42.596199    5073 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:12:47.598046    5073 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:12:47.598088    5073 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:12:52.600221    5073 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:12:52.600261    5073 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:12:57.602518    5073 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:12:57.602577    5073 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:13:02.604047    5073 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:13:02.604203    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0930 04:13:02.615298    5073 logs.go:276] 1 containers: [b12a86631e54]
	I0930 04:13:02.615389    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0930 04:13:02.625721    5073 logs.go:276] 1 containers: [c36fc47de11b]
	I0930 04:13:02.625808    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0930 04:13:02.636340    5073 logs.go:276] 2 containers: [d2a057a51189 fc29cb8eea7a]
	I0930 04:13:02.636425    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0930 04:13:02.646336    5073 logs.go:276] 1 containers: [8c2543425d7f]
	I0930 04:13:02.646414    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0930 04:13:02.657169    5073 logs.go:276] 1 containers: [2181da380ab5]
	I0930 04:13:02.657251    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0930 04:13:02.667691    5073 logs.go:276] 1 containers: [3e19f354d734]
	I0930 04:13:02.667778    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0930 04:13:02.677440    5073 logs.go:276] 0 containers: []
	W0930 04:13:02.677450    5073 logs.go:278] No container was found matching "kindnet"
	I0930 04:13:02.677514    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0930 04:13:02.690005    5073 logs.go:276] 1 containers: [74695c7fad09]
	I0930 04:13:02.690022    5073 logs.go:123] Gathering logs for etcd [c36fc47de11b] ...
	I0930 04:13:02.690027    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c36fc47de11b"
	I0930 04:13:02.703848    5073 logs.go:123] Gathering logs for kube-scheduler [8c2543425d7f] ...
	I0930 04:13:02.703863    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c2543425d7f"
	I0930 04:13:02.718500    5073 logs.go:123] Gathering logs for storage-provisioner [74695c7fad09] ...
	I0930 04:13:02.718510    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74695c7fad09"
	I0930 04:13:02.729841    5073 logs.go:123] Gathering logs for Docker ...
	I0930 04:13:02.729851    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0930 04:13:02.753710    5073 logs.go:123] Gathering logs for kubelet ...
	I0930 04:13:02.753720    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 04:13:02.787870    5073 logs.go:123] Gathering logs for describe nodes ...
	I0930 04:13:02.787877    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0930 04:13:02.822310    5073 logs.go:123] Gathering logs for kube-apiserver [b12a86631e54] ...
	I0930 04:13:02.822322    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b12a86631e54"
	I0930 04:13:02.837219    5073 logs.go:123] Gathering logs for kube-proxy [2181da380ab5] ...
	I0930 04:13:02.837232    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2181da380ab5"
	I0930 04:13:02.849377    5073 logs.go:123] Gathering logs for kube-controller-manager [3e19f354d734] ...
	I0930 04:13:02.849387    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e19f354d734"
	I0930 04:13:02.867180    5073 logs.go:123] Gathering logs for container status ...
	I0930 04:13:02.867193    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 04:13:02.879062    5073 logs.go:123] Gathering logs for dmesg ...
	I0930 04:13:02.879076    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 04:13:02.883328    5073 logs.go:123] Gathering logs for coredns [d2a057a51189] ...
	I0930 04:13:02.883335    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2a057a51189"
	I0930 04:13:02.895007    5073 logs.go:123] Gathering logs for coredns [fc29cb8eea7a] ...
	I0930 04:13:02.895020    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc29cb8eea7a"
	I0930 04:13:05.408573    5073 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:13:10.410840    5073 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:13:10.410963    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0930 04:13:10.421524    5073 logs.go:276] 1 containers: [b12a86631e54]
	I0930 04:13:10.421619    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0930 04:13:10.431820    5073 logs.go:276] 1 containers: [c36fc47de11b]
	I0930 04:13:10.431904    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0930 04:13:10.442437    5073 logs.go:276] 2 containers: [d2a057a51189 fc29cb8eea7a]
	I0930 04:13:10.442523    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0930 04:13:10.453297    5073 logs.go:276] 1 containers: [8c2543425d7f]
	I0930 04:13:10.453372    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0930 04:13:10.463920    5073 logs.go:276] 1 containers: [2181da380ab5]
	I0930 04:13:10.464008    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0930 04:13:10.475826    5073 logs.go:276] 1 containers: [3e19f354d734]
	I0930 04:13:10.475907    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0930 04:13:10.486152    5073 logs.go:276] 0 containers: []
	W0930 04:13:10.486163    5073 logs.go:278] No container was found matching "kindnet"
	I0930 04:13:10.486231    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0930 04:13:10.497424    5073 logs.go:276] 1 containers: [74695c7fad09]
	I0930 04:13:10.497442    5073 logs.go:123] Gathering logs for kube-apiserver [b12a86631e54] ...
	I0930 04:13:10.497449    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b12a86631e54"
	I0930 04:13:10.515374    5073 logs.go:123] Gathering logs for etcd [c36fc47de11b] ...
	I0930 04:13:10.515383    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c36fc47de11b"
	I0930 04:13:10.529625    5073 logs.go:123] Gathering logs for kube-scheduler [8c2543425d7f] ...
	I0930 04:13:10.529635    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c2543425d7f"
	I0930 04:13:10.543739    5073 logs.go:123] Gathering logs for Docker ...
	I0930 04:13:10.543750    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0930 04:13:10.568039    5073 logs.go:123] Gathering logs for kube-proxy [2181da380ab5] ...
	I0930 04:13:10.568051    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2181da380ab5"
	I0930 04:13:10.580054    5073 logs.go:123] Gathering logs for kube-controller-manager [3e19f354d734] ...
	I0930 04:13:10.580064    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e19f354d734"
	I0930 04:13:10.597428    5073 logs.go:123] Gathering logs for storage-provisioner [74695c7fad09] ...
	I0930 04:13:10.597440    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74695c7fad09"
	I0930 04:13:10.612248    5073 logs.go:123] Gathering logs for kubelet ...
	I0930 04:13:10.612263    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 04:13:10.647949    5073 logs.go:123] Gathering logs for dmesg ...
	I0930 04:13:10.647957    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 04:13:10.652113    5073 logs.go:123] Gathering logs for describe nodes ...
	I0930 04:13:10.652119    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0930 04:13:10.688165    5073 logs.go:123] Gathering logs for coredns [d2a057a51189] ...
	I0930 04:13:10.688175    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2a057a51189"
	I0930 04:13:10.707481    5073 logs.go:123] Gathering logs for coredns [fc29cb8eea7a] ...
	I0930 04:13:10.707492    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc29cb8eea7a"
	I0930 04:13:10.718855    5073 logs.go:123] Gathering logs for container status ...
	I0930 04:13:10.718864    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 04:13:13.232126    5073 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:13:18.234912    5073 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:13:18.235466    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0930 04:13:18.272441    5073 logs.go:276] 1 containers: [b12a86631e54]
	I0930 04:13:18.272609    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0930 04:13:18.299441    5073 logs.go:276] 1 containers: [c36fc47de11b]
	I0930 04:13:18.299563    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0930 04:13:18.313399    5073 logs.go:276] 2 containers: [d2a057a51189 fc29cb8eea7a]
	I0930 04:13:18.313494    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0930 04:13:18.329690    5073 logs.go:276] 1 containers: [8c2543425d7f]
	I0930 04:13:18.329784    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0930 04:13:18.340693    5073 logs.go:276] 1 containers: [2181da380ab5]
	I0930 04:13:18.340775    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0930 04:13:18.352292    5073 logs.go:276] 1 containers: [3e19f354d734]
	I0930 04:13:18.352374    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0930 04:13:18.362584    5073 logs.go:276] 0 containers: []
	W0930 04:13:18.362595    5073 logs.go:278] No container was found matching "kindnet"
	I0930 04:13:18.362667    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0930 04:13:18.373644    5073 logs.go:276] 1 containers: [74695c7fad09]
	I0930 04:13:18.373659    5073 logs.go:123] Gathering logs for describe nodes ...
	I0930 04:13:18.373665    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0930 04:13:18.411373    5073 logs.go:123] Gathering logs for coredns [d2a057a51189] ...
	I0930 04:13:18.411384    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2a057a51189"
	I0930 04:13:18.423858    5073 logs.go:123] Gathering logs for coredns [fc29cb8eea7a] ...
	I0930 04:13:18.423874    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc29cb8eea7a"
	I0930 04:13:18.435716    5073 logs.go:123] Gathering logs for kube-scheduler [8c2543425d7f] ...
	I0930 04:13:18.435725    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c2543425d7f"
	I0930 04:13:18.455680    5073 logs.go:123] Gathering logs for storage-provisioner [74695c7fad09] ...
	I0930 04:13:18.455695    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74695c7fad09"
	I0930 04:13:18.476269    5073 logs.go:123] Gathering logs for Docker ...
	I0930 04:13:18.476280    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0930 04:13:18.501502    5073 logs.go:123] Gathering logs for kubelet ...
	I0930 04:13:18.501512    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 04:13:18.538218    5073 logs.go:123] Gathering logs for dmesg ...
	I0930 04:13:18.538235    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 04:13:18.542846    5073 logs.go:123] Gathering logs for container status ...
	I0930 04:13:18.542852    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 04:13:18.554379    5073 logs.go:123] Gathering logs for kube-proxy [2181da380ab5] ...
	I0930 04:13:18.554392    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2181da380ab5"
	I0930 04:13:18.567314    5073 logs.go:123] Gathering logs for kube-controller-manager [3e19f354d734] ...
	I0930 04:13:18.567324    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e19f354d734"
	I0930 04:13:18.586021    5073 logs.go:123] Gathering logs for kube-apiserver [b12a86631e54] ...
	I0930 04:13:18.586036    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b12a86631e54"
	I0930 04:13:18.601069    5073 logs.go:123] Gathering logs for etcd [c36fc47de11b] ...
	I0930 04:13:18.601080    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c36fc47de11b"
	I0930 04:13:21.116803    5073 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:13:26.119036    5073 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:13:26.119526    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0930 04:13:26.159257    5073 logs.go:276] 1 containers: [b12a86631e54]
	I0930 04:13:26.159407    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0930 04:13:26.180465    5073 logs.go:276] 1 containers: [c36fc47de11b]
	I0930 04:13:26.180599    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0930 04:13:26.196002    5073 logs.go:276] 2 containers: [d2a057a51189 fc29cb8eea7a]
	I0930 04:13:26.196089    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0930 04:13:26.208641    5073 logs.go:276] 1 containers: [8c2543425d7f]
	I0930 04:13:26.208724    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0930 04:13:26.219674    5073 logs.go:276] 1 containers: [2181da380ab5]
	I0930 04:13:26.219751    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0930 04:13:26.230181    5073 logs.go:276] 1 containers: [3e19f354d734]
	I0930 04:13:26.230272    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0930 04:13:26.240529    5073 logs.go:276] 0 containers: []
	W0930 04:13:26.240545    5073 logs.go:278] No container was found matching "kindnet"
	I0930 04:13:26.240622    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0930 04:13:26.251192    5073 logs.go:276] 1 containers: [74695c7fad09]
	I0930 04:13:26.251207    5073 logs.go:123] Gathering logs for Docker ...
	I0930 04:13:26.251213    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0930 04:13:26.274041    5073 logs.go:123] Gathering logs for container status ...
	I0930 04:13:26.274048    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 04:13:26.285346    5073 logs.go:123] Gathering logs for describe nodes ...
	I0930 04:13:26.285360    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0930 04:13:26.321127    5073 logs.go:123] Gathering logs for coredns [d2a057a51189] ...
	I0930 04:13:26.321139    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2a057a51189"
	I0930 04:13:26.336271    5073 logs.go:123] Gathering logs for kube-proxy [2181da380ab5] ...
	I0930 04:13:26.336284    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2181da380ab5"
	I0930 04:13:26.348290    5073 logs.go:123] Gathering logs for kube-controller-manager [3e19f354d734] ...
	I0930 04:13:26.348300    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e19f354d734"
	I0930 04:13:26.365620    5073 logs.go:123] Gathering logs for coredns [fc29cb8eea7a] ...
	I0930 04:13:26.365630    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc29cb8eea7a"
	I0930 04:13:26.379491    5073 logs.go:123] Gathering logs for kube-scheduler [8c2543425d7f] ...
	I0930 04:13:26.379501    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c2543425d7f"
	I0930 04:13:26.398034    5073 logs.go:123] Gathering logs for storage-provisioner [74695c7fad09] ...
	I0930 04:13:26.398047    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74695c7fad09"
	I0930 04:13:26.409544    5073 logs.go:123] Gathering logs for kubelet ...
	I0930 04:13:26.409556    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 04:13:26.445368    5073 logs.go:123] Gathering logs for dmesg ...
	I0930 04:13:26.445376    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 04:13:26.449671    5073 logs.go:123] Gathering logs for kube-apiserver [b12a86631e54] ...
	I0930 04:13:26.449680    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b12a86631e54"
	I0930 04:13:26.463840    5073 logs.go:123] Gathering logs for etcd [c36fc47de11b] ...
	I0930 04:13:26.463853    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c36fc47de11b"
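
The repeated "Checking apiserver healthz" / "stopped" pairs above are a timed probe of the apiserver's /healthz endpoint; each attempt gives up with a client timeout after roughly five seconds (compare each "Checking" timestamp with its "stopped" line). A minimal standalone sketch of such a probe in Go follows — not minikube's actual implementation; the 5-second timeout and the skipped TLS verification are assumptions inferred from the log:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// checkHealthz probes the apiserver's /healthz endpoint once, the way the
// "Checking apiserver healthz ..." / "stopped: ..." pairs in the log suggest.
func checkHealthz(url string) error {
	client := &http.Client{
		// Assumed: a 5s client timeout, inferred from the ~5s gap between
		// each "Checking" line and its "stopped" line.
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Assumed: the in-VM apiserver cert is not in the host trust
			// store, so a standalone probe would skip verification.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		// This is the failure mode in the log: "context deadline exceeded
		// (Client.Timeout exceeded while awaiting headers)".
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %d", resp.StatusCode)
	}
	return nil
}

func main() {
	if err := checkHealthz("https://10.0.2.15:8443/healthz"); err != nil {
		fmt.Println("stopped:", err)
	}
}
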
	I0930 04:13:28.979448    5073 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:13:33.981607    5073 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:13:33.981849    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0930 04:13:34.011716    5073 logs.go:276] 1 containers: [b12a86631e54]
	I0930 04:13:34.011834    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0930 04:13:34.027596    5073 logs.go:276] 1 containers: [c36fc47de11b]
	I0930 04:13:34.027704    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0930 04:13:34.040535    5073 logs.go:276] 2 containers: [d2a057a51189 fc29cb8eea7a]
	I0930 04:13:34.040623    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0930 04:13:34.052029    5073 logs.go:276] 1 containers: [8c2543425d7f]
	I0930 04:13:34.052101    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0930 04:13:34.062318    5073 logs.go:276] 1 containers: [2181da380ab5]
	I0930 04:13:34.062391    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0930 04:13:34.073041    5073 logs.go:276] 1 containers: [3e19f354d734]
	I0930 04:13:34.073113    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0930 04:13:34.082593    5073 logs.go:276] 0 containers: []
	W0930 04:13:34.082602    5073 logs.go:278] No container was found matching "kindnet"
	I0930 04:13:34.082659    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0930 04:13:34.092848    5073 logs.go:276] 1 containers: [74695c7fad09]
	I0930 04:13:34.092863    5073 logs.go:123] Gathering logs for kube-controller-manager [3e19f354d734] ...
	I0930 04:13:34.092868    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e19f354d734"
	I0930 04:13:34.113994    5073 logs.go:123] Gathering logs for container status ...
	I0930 04:13:34.114007    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 04:13:34.125508    5073 logs.go:123] Gathering logs for kubelet ...
	I0930 04:13:34.125523    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 04:13:34.161066    5073 logs.go:123] Gathering logs for etcd [c36fc47de11b] ...
	I0930 04:13:34.161077    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c36fc47de11b"
	I0930 04:13:34.175474    5073 logs.go:123] Gathering logs for coredns [fc29cb8eea7a] ...
	I0930 04:13:34.175487    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc29cb8eea7a"
	I0930 04:13:34.186913    5073 logs.go:123] Gathering logs for kube-scheduler [8c2543425d7f] ...
	I0930 04:13:34.186926    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c2543425d7f"
	I0930 04:13:34.201360    5073 logs.go:123] Gathering logs for kube-proxy [2181da380ab5] ...
	I0930 04:13:34.201373    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2181da380ab5"
	I0930 04:13:34.213034    5073 logs.go:123] Gathering logs for Docker ...
	I0930 04:13:34.213049    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0930 04:13:34.238140    5073 logs.go:123] Gathering logs for dmesg ...
	I0930 04:13:34.238147    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 04:13:34.242402    5073 logs.go:123] Gathering logs for describe nodes ...
	I0930 04:13:34.242410    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0930 04:13:34.276261    5073 logs.go:123] Gathering logs for kube-apiserver [b12a86631e54] ...
	I0930 04:13:34.276275    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b12a86631e54"
	I0930 04:13:34.292912    5073 logs.go:123] Gathering logs for coredns [d2a057a51189] ...
	I0930 04:13:34.292921    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2a057a51189"
	I0930 04:13:34.303717    5073 logs.go:123] Gathering logs for storage-provisioner [74695c7fad09] ...
	I0930 04:13:34.303727    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74695c7fad09"
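
Before each round of log collection, the tool enumerates the control-plane containers with one docker ps query per component, producing the "N containers: [...]" lines above. A rough local equivalent of that enumeration — run directly rather than over SSH, with the component list taken from the queries in the log:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs lists all container IDs (running or exited) whose name
// matches k8s_<component>, mirroring the repeated
// "docker ps -a --filter=name=k8s_... --format={{.ID}}" calls above.
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet",
		"storage-provisioner",
	}
	for _, c := range components {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Printf("%s: %v\n", c, err)
			continue
		}
		// Same shape as the "N containers: [...]" lines in the log.
		fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
	}
}
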
	I0930 04:13:36.817105    5073 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:13:41.819809    5073 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:13:41.820193    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0930 04:13:41.850079    5073 logs.go:276] 1 containers: [b12a86631e54]
	I0930 04:13:41.850218    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0930 04:13:41.868912    5073 logs.go:276] 1 containers: [c36fc47de11b]
	I0930 04:13:41.869008    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0930 04:13:41.882653    5073 logs.go:276] 2 containers: [d2a057a51189 fc29cb8eea7a]
	I0930 04:13:41.882729    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0930 04:13:41.894417    5073 logs.go:276] 1 containers: [8c2543425d7f]
	I0930 04:13:41.894513    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0930 04:13:41.904816    5073 logs.go:276] 1 containers: [2181da380ab5]
	I0930 04:13:41.904890    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0930 04:13:41.914822    5073 logs.go:276] 1 containers: [3e19f354d734]
	I0930 04:13:41.914894    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0930 04:13:41.925092    5073 logs.go:276] 0 containers: []
	W0930 04:13:41.925105    5073 logs.go:278] No container was found matching "kindnet"
	I0930 04:13:41.925165    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0930 04:13:41.935782    5073 logs.go:276] 1 containers: [74695c7fad09]
	I0930 04:13:41.935795    5073 logs.go:123] Gathering logs for kube-controller-manager [3e19f354d734] ...
	I0930 04:13:41.935803    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e19f354d734"
	I0930 04:13:41.952734    5073 logs.go:123] Gathering logs for Docker ...
	I0930 04:13:41.952747    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0930 04:13:41.977654    5073 logs.go:123] Gathering logs for kubelet ...
	I0930 04:13:41.977662    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 04:13:42.013480    5073 logs.go:123] Gathering logs for dmesg ...
	I0930 04:13:42.013490    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 04:13:42.018277    5073 logs.go:123] Gathering logs for describe nodes ...
	I0930 04:13:42.018286    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0930 04:13:42.054201    5073 logs.go:123] Gathering logs for etcd [c36fc47de11b] ...
	I0930 04:13:42.054211    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c36fc47de11b"
	I0930 04:13:42.068358    5073 logs.go:123] Gathering logs for coredns [d2a057a51189] ...
	I0930 04:13:42.068368    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2a057a51189"
	I0930 04:13:42.079288    5073 logs.go:123] Gathering logs for kube-scheduler [8c2543425d7f] ...
	I0930 04:13:42.079297    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c2543425d7f"
	I0930 04:13:42.093627    5073 logs.go:123] Gathering logs for container status ...
	I0930 04:13:42.093637    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 04:13:42.105185    5073 logs.go:123] Gathering logs for kube-apiserver [b12a86631e54] ...
	I0930 04:13:42.105193    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b12a86631e54"
	I0930 04:13:42.120281    5073 logs.go:123] Gathering logs for coredns [fc29cb8eea7a] ...
	I0930 04:13:42.120294    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc29cb8eea7a"
	I0930 04:13:42.131827    5073 logs.go:123] Gathering logs for kube-proxy [2181da380ab5] ...
	I0930 04:13:42.131841    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2181da380ab5"
	I0930 04:13:42.143556    5073 logs.go:123] Gathering logs for storage-provisioner [74695c7fad09] ...
	I0930 04:13:42.143567    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74695c7fad09"
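
Each "Gathering logs for ..." step above maps to a single shell command executed in the guest: journalctl for the kubelet and Docker/cri-docker units, docker logs --tail 400 for each container, and a crictl-or-docker fallback for the container status. A compact sketch of that pattern, running locally via os/exec instead of minikube's ssh_runner, with the command strings copied from the log:

package main

import (
	"fmt"
	"os/exec"
)

// gather runs one collection command through /bin/bash -c, the same way
// the ssh_runner lines above do inside the guest.
func gather(name, command string) {
	fmt.Printf("Gathering logs for %s ...\n", name)
	out, err := exec.Command("/bin/bash", "-c", command).CombinedOutput()
	if err != nil {
		fmt.Printf("%s: %v\n", name, err)
	}
	fmt.Print(string(out))
}

func main() {
	// The container ID below is the kube-apiserver ID from this particular
	// run; it is only illustrative and would differ on any other run.
	gather("kubelet", "sudo journalctl -u kubelet -n 400")
	gather("dmesg",
		"sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400")
	gather("kube-apiserver [b12a86631e54]",
		"docker logs --tail 400 b12a86631e54")
	gather("container status",
		"sudo `which crictl || echo crictl` ps -a || sudo docker ps -a")
}
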
	I0930 04:13:44.657120    5073 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:13:49.659780    5073 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:13:49.660350    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0930 04:13:49.701202    5073 logs.go:276] 1 containers: [b12a86631e54]
	I0930 04:13:49.701370    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0930 04:13:49.723436    5073 logs.go:276] 1 containers: [c36fc47de11b]
	I0930 04:13:49.723563    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0930 04:13:49.738782    5073 logs.go:276] 2 containers: [d2a057a51189 fc29cb8eea7a]
	I0930 04:13:49.738876    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0930 04:13:49.754757    5073 logs.go:276] 1 containers: [8c2543425d7f]
	I0930 04:13:49.754842    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0930 04:13:49.765522    5073 logs.go:276] 1 containers: [2181da380ab5]
	I0930 04:13:49.765610    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0930 04:13:49.775985    5073 logs.go:276] 1 containers: [3e19f354d734]
	I0930 04:13:49.776055    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0930 04:13:49.789832    5073 logs.go:276] 0 containers: []
	W0930 04:13:49.789846    5073 logs.go:278] No container was found matching "kindnet"
	I0930 04:13:49.789940    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0930 04:13:49.800248    5073 logs.go:276] 1 containers: [74695c7fad09]
	I0930 04:13:49.800269    5073 logs.go:123] Gathering logs for describe nodes ...
	I0930 04:13:49.800275    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0930 04:13:49.835499    5073 logs.go:123] Gathering logs for kube-apiserver [b12a86631e54] ...
	I0930 04:13:49.835513    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b12a86631e54"
	I0930 04:13:49.850590    5073 logs.go:123] Gathering logs for etcd [c36fc47de11b] ...
	I0930 04:13:49.850602    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c36fc47de11b"
	I0930 04:13:49.866512    5073 logs.go:123] Gathering logs for storage-provisioner [74695c7fad09] ...
	I0930 04:13:49.866524    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74695c7fad09"
	I0930 04:13:49.878560    5073 logs.go:123] Gathering logs for Docker ...
	I0930 04:13:49.878576    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0930 04:13:49.903145    5073 logs.go:123] Gathering logs for kubelet ...
	I0930 04:13:49.903154    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 04:13:49.937949    5073 logs.go:123] Gathering logs for coredns [d2a057a51189] ...
	I0930 04:13:49.937957    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2a057a51189"
	I0930 04:13:49.950337    5073 logs.go:123] Gathering logs for coredns [fc29cb8eea7a] ...
	I0930 04:13:49.950349    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc29cb8eea7a"
	I0930 04:13:49.961989    5073 logs.go:123] Gathering logs for kube-scheduler [8c2543425d7f] ...
	I0930 04:13:49.961998    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c2543425d7f"
	I0930 04:13:49.976270    5073 logs.go:123] Gathering logs for kube-proxy [2181da380ab5] ...
	I0930 04:13:49.976280    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2181da380ab5"
	I0930 04:13:49.987657    5073 logs.go:123] Gathering logs for kube-controller-manager [3e19f354d734] ...
	I0930 04:13:49.987669    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e19f354d734"
	I0930 04:13:50.005117    5073 logs.go:123] Gathering logs for container status ...
	I0930 04:13:50.005127    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 04:13:50.016603    5073 logs.go:123] Gathering logs for dmesg ...
	I0930 04:13:50.016613    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 04:13:52.522037    5073 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:13:57.524771    5073 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:13:57.525289    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0930 04:13:57.567398    5073 logs.go:276] 1 containers: [b12a86631e54]
	I0930 04:13:57.567538    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0930 04:13:57.585646    5073 logs.go:276] 1 containers: [c36fc47de11b]
	I0930 04:13:57.585739    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0930 04:13:57.598116    5073 logs.go:276] 2 containers: [d2a057a51189 fc29cb8eea7a]
	I0930 04:13:57.598204    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0930 04:13:57.623714    5073 logs.go:276] 1 containers: [8c2543425d7f]
	I0930 04:13:57.623788    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0930 04:13:57.639000    5073 logs.go:276] 1 containers: [2181da380ab5]
	I0930 04:13:57.639092    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0930 04:13:57.649679    5073 logs.go:276] 1 containers: [3e19f354d734]
	I0930 04:13:57.649760    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0930 04:13:57.659702    5073 logs.go:276] 0 containers: []
	W0930 04:13:57.659713    5073 logs.go:278] No container was found matching "kindnet"
	I0930 04:13:57.659775    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0930 04:13:57.671640    5073 logs.go:276] 1 containers: [74695c7fad09]
	I0930 04:13:57.671660    5073 logs.go:123] Gathering logs for kube-controller-manager [3e19f354d734] ...
	I0930 04:13:57.671665    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e19f354d734"
	I0930 04:13:57.693783    5073 logs.go:123] Gathering logs for storage-provisioner [74695c7fad09] ...
	I0930 04:13:57.693794    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74695c7fad09"
	I0930 04:13:57.705766    5073 logs.go:123] Gathering logs for kubelet ...
	I0930 04:13:57.705776    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 04:13:57.739977    5073 logs.go:123] Gathering logs for dmesg ...
	I0930 04:13:57.739987    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 04:13:57.744456    5073 logs.go:123] Gathering logs for describe nodes ...
	I0930 04:13:57.744465    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0930 04:13:57.780195    5073 logs.go:123] Gathering logs for coredns [fc29cb8eea7a] ...
	I0930 04:13:57.780211    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc29cb8eea7a"
	I0930 04:13:57.791740    5073 logs.go:123] Gathering logs for kube-scheduler [8c2543425d7f] ...
	I0930 04:13:57.791756    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c2543425d7f"
	I0930 04:13:57.806102    5073 logs.go:123] Gathering logs for kube-proxy [2181da380ab5] ...
	I0930 04:13:57.806112    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2181da380ab5"
	I0930 04:13:57.824159    5073 logs.go:123] Gathering logs for container status ...
	I0930 04:13:57.824174    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 04:13:57.835644    5073 logs.go:123] Gathering logs for kube-apiserver [b12a86631e54] ...
	I0930 04:13:57.835655    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b12a86631e54"
	I0930 04:13:57.851447    5073 logs.go:123] Gathering logs for etcd [c36fc47de11b] ...
	I0930 04:13:57.851456    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c36fc47de11b"
	I0930 04:13:57.867354    5073 logs.go:123] Gathering logs for coredns [d2a057a51189] ...
	I0930 04:13:57.867365    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2a057a51189"
	I0930 04:13:57.878886    5073 logs.go:123] Gathering logs for Docker ...
	I0930 04:13:57.878898    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0930 04:14:00.403194    5073 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:14:05.405968    5073 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:14:05.406576    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0930 04:14:05.443699    5073 logs.go:276] 1 containers: [b12a86631e54]
	I0930 04:14:05.443866    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0930 04:14:05.464864    5073 logs.go:276] 1 containers: [c36fc47de11b]
	I0930 04:14:05.465022    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0930 04:14:05.479772    5073 logs.go:276] 2 containers: [d2a057a51189 fc29cb8eea7a]
	I0930 04:14:05.479856    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0930 04:14:05.492047    5073 logs.go:276] 1 containers: [8c2543425d7f]
	I0930 04:14:05.492136    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0930 04:14:05.502903    5073 logs.go:276] 1 containers: [2181da380ab5]
	I0930 04:14:05.502983    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0930 04:14:05.513342    5073 logs.go:276] 1 containers: [3e19f354d734]
	I0930 04:14:05.513425    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0930 04:14:05.523915    5073 logs.go:276] 0 containers: []
	W0930 04:14:05.523926    5073 logs.go:278] No container was found matching "kindnet"
	I0930 04:14:05.523997    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0930 04:14:05.536326    5073 logs.go:276] 1 containers: [74695c7fad09]
	I0930 04:14:05.536339    5073 logs.go:123] Gathering logs for dmesg ...
	I0930 04:14:05.536344    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 04:14:05.541150    5073 logs.go:123] Gathering logs for kube-apiserver [b12a86631e54] ...
	I0930 04:14:05.541157    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b12a86631e54"
	I0930 04:14:05.556046    5073 logs.go:123] Gathering logs for coredns [d2a057a51189] ...
	I0930 04:14:05.556059    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2a057a51189"
	I0930 04:14:05.567755    5073 logs.go:123] Gathering logs for coredns [fc29cb8eea7a] ...
	I0930 04:14:05.567768    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc29cb8eea7a"
	I0930 04:14:05.579312    5073 logs.go:123] Gathering logs for kube-controller-manager [3e19f354d734] ...
	I0930 04:14:05.579322    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e19f354d734"
	I0930 04:14:05.596722    5073 logs.go:123] Gathering logs for Docker ...
	I0930 04:14:05.596733    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0930 04:14:05.621259    5073 logs.go:123] Gathering logs for container status ...
	I0930 04:14:05.621268    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 04:14:05.632760    5073 logs.go:123] Gathering logs for kubelet ...
	I0930 04:14:05.632772    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 04:14:05.668095    5073 logs.go:123] Gathering logs for describe nodes ...
	I0930 04:14:05.668105    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0930 04:14:05.705933    5073 logs.go:123] Gathering logs for etcd [c36fc47de11b] ...
	I0930 04:14:05.705944    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c36fc47de11b"
	I0930 04:14:05.719980    5073 logs.go:123] Gathering logs for kube-scheduler [8c2543425d7f] ...
	I0930 04:14:05.719990    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c2543425d7f"
	I0930 04:14:05.734389    5073 logs.go:123] Gathering logs for kube-proxy [2181da380ab5] ...
	I0930 04:14:05.734402    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2181da380ab5"
	I0930 04:14:05.746078    5073 logs.go:123] Gathering logs for storage-provisioner [74695c7fad09] ...
	I0930 04:14:05.746089    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74695c7fad09"
	I0930 04:14:08.258939    5073 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:14:13.259834    5073 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:14:13.260386    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0930 04:14:13.297662    5073 logs.go:276] 1 containers: [b12a86631e54]
	I0930 04:14:13.297827    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0930 04:14:13.319284    5073 logs.go:276] 1 containers: [c36fc47de11b]
	I0930 04:14:13.319385    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0930 04:14:13.334499    5073 logs.go:276] 2 containers: [d2a057a51189 fc29cb8eea7a]
	I0930 04:14:13.334594    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0930 04:14:13.346841    5073 logs.go:276] 1 containers: [8c2543425d7f]
	I0930 04:14:13.346920    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0930 04:14:13.357260    5073 logs.go:276] 1 containers: [2181da380ab5]
	I0930 04:14:13.357347    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0930 04:14:13.368153    5073 logs.go:276] 1 containers: [3e19f354d734]
	I0930 04:14:13.368226    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0930 04:14:13.378528    5073 logs.go:276] 0 containers: []
	W0930 04:14:13.378542    5073 logs.go:278] No container was found matching "kindnet"
	I0930 04:14:13.378616    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0930 04:14:13.389051    5073 logs.go:276] 1 containers: [74695c7fad09]
	I0930 04:14:13.389066    5073 logs.go:123] Gathering logs for kube-scheduler [8c2543425d7f] ...
	I0930 04:14:13.389072    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c2543425d7f"
	I0930 04:14:13.403481    5073 logs.go:123] Gathering logs for storage-provisioner [74695c7fad09] ...
	I0930 04:14:13.403491    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74695c7fad09"
	I0930 04:14:13.414743    5073 logs.go:123] Gathering logs for Docker ...
	I0930 04:14:13.414752    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0930 04:14:13.438120    5073 logs.go:123] Gathering logs for dmesg ...
	I0930 04:14:13.438128    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 04:14:13.442215    5073 logs.go:123] Gathering logs for etcd [c36fc47de11b] ...
	I0930 04:14:13.442222    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c36fc47de11b"
	I0930 04:14:13.455901    5073 logs.go:123] Gathering logs for kube-apiserver [b12a86631e54] ...
	I0930 04:14:13.455914    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b12a86631e54"
	I0930 04:14:13.469529    5073 logs.go:123] Gathering logs for coredns [d2a057a51189] ...
	I0930 04:14:13.469541    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2a057a51189"
	I0930 04:14:13.481423    5073 logs.go:123] Gathering logs for coredns [fc29cb8eea7a] ...
	I0930 04:14:13.481437    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc29cb8eea7a"
	I0930 04:14:13.493187    5073 logs.go:123] Gathering logs for kube-proxy [2181da380ab5] ...
	I0930 04:14:13.493199    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2181da380ab5"
	I0930 04:14:13.512020    5073 logs.go:123] Gathering logs for kube-controller-manager [3e19f354d734] ...
	I0930 04:14:13.512034    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e19f354d734"
	I0930 04:14:13.529427    5073 logs.go:123] Gathering logs for container status ...
	I0930 04:14:13.529438    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 04:14:13.540950    5073 logs.go:123] Gathering logs for kubelet ...
	I0930 04:14:13.540962    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 04:14:13.574380    5073 logs.go:123] Gathering logs for describe nodes ...
	I0930 04:14:13.574387    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0930 04:14:16.112439    5073 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:14:21.114967    5073 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:14:21.115471    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0930 04:14:21.151176    5073 logs.go:276] 1 containers: [b12a86631e54]
	I0930 04:14:21.151331    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0930 04:14:21.168827    5073 logs.go:276] 1 containers: [c36fc47de11b]
	I0930 04:14:21.168934    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0930 04:14:21.182218    5073 logs.go:276] 4 containers: [8d7a5aa53a70 760ac328526b d2a057a51189 fc29cb8eea7a]
	I0930 04:14:21.182311    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0930 04:14:21.193506    5073 logs.go:276] 1 containers: [8c2543425d7f]
	I0930 04:14:21.193587    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0930 04:14:21.204310    5073 logs.go:276] 1 containers: [2181da380ab5]
	I0930 04:14:21.204399    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0930 04:14:21.214916    5073 logs.go:276] 1 containers: [3e19f354d734]
	I0930 04:14:21.214995    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0930 04:14:21.225574    5073 logs.go:276] 0 containers: []
	W0930 04:14:21.225586    5073 logs.go:278] No container was found matching "kindnet"
	I0930 04:14:21.225655    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0930 04:14:21.236231    5073 logs.go:276] 1 containers: [74695c7fad09]
	I0930 04:14:21.236248    5073 logs.go:123] Gathering logs for container status ...
	I0930 04:14:21.236254    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 04:14:21.248444    5073 logs.go:123] Gathering logs for kube-apiserver [b12a86631e54] ...
	I0930 04:14:21.248456    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b12a86631e54"
	I0930 04:14:21.262721    5073 logs.go:123] Gathering logs for coredns [fc29cb8eea7a] ...
	I0930 04:14:21.262733    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc29cb8eea7a"
	I0930 04:14:21.274669    5073 logs.go:123] Gathering logs for kube-controller-manager [3e19f354d734] ...
	I0930 04:14:21.274680    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e19f354d734"
	I0930 04:14:21.296020    5073 logs.go:123] Gathering logs for Docker ...
	I0930 04:14:21.296031    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0930 04:14:21.319814    5073 logs.go:123] Gathering logs for coredns [760ac328526b] ...
	I0930 04:14:21.319821    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 760ac328526b"
	I0930 04:14:21.334876    5073 logs.go:123] Gathering logs for coredns [d2a057a51189] ...
	I0930 04:14:21.334890    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2a057a51189"
	I0930 04:14:21.346615    5073 logs.go:123] Gathering logs for kube-scheduler [8c2543425d7f] ...
	I0930 04:14:21.346632    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c2543425d7f"
	I0930 04:14:21.361672    5073 logs.go:123] Gathering logs for etcd [c36fc47de11b] ...
	I0930 04:14:21.361682    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c36fc47de11b"
	I0930 04:14:21.382015    5073 logs.go:123] Gathering logs for coredns [8d7a5aa53a70] ...
	I0930 04:14:21.382025    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d7a5aa53a70"
	I0930 04:14:21.392886    5073 logs.go:123] Gathering logs for storage-provisioner [74695c7fad09] ...
	I0930 04:14:21.392897    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74695c7fad09"
	I0930 04:14:21.404310    5073 logs.go:123] Gathering logs for kubelet ...
	I0930 04:14:21.404326    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 04:14:21.440533    5073 logs.go:123] Gathering logs for dmesg ...
	I0930 04:14:21.440541    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 04:14:21.445148    5073 logs.go:123] Gathering logs for describe nodes ...
	I0930 04:14:21.445158    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0930 04:14:21.479409    5073 logs.go:123] Gathering logs for kube-proxy [2181da380ab5] ...
	I0930 04:14:21.479422    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2181da380ab5"
	I0930 04:14:23.992271    5073 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:14:28.995037    5073 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:14:28.995569    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0930 04:14:29.035668    5073 logs.go:276] 1 containers: [b12a86631e54]
	I0930 04:14:29.035817    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0930 04:14:29.065475    5073 logs.go:276] 1 containers: [c36fc47de11b]
	I0930 04:14:29.065596    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0930 04:14:29.080059    5073 logs.go:276] 4 containers: [8d7a5aa53a70 760ac328526b d2a057a51189 fc29cb8eea7a]
	I0930 04:14:29.080178    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0930 04:14:29.094044    5073 logs.go:276] 1 containers: [8c2543425d7f]
	I0930 04:14:29.094133    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0930 04:14:29.104434    5073 logs.go:276] 1 containers: [2181da380ab5]
	I0930 04:14:29.104506    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0930 04:14:29.114412    5073 logs.go:276] 1 containers: [3e19f354d734]
	I0930 04:14:29.114489    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0930 04:14:29.124274    5073 logs.go:276] 0 containers: []
	W0930 04:14:29.124286    5073 logs.go:278] No container was found matching "kindnet"
	I0930 04:14:29.124349    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0930 04:14:29.134455    5073 logs.go:276] 1 containers: [74695c7fad09]
	I0930 04:14:29.134475    5073 logs.go:123] Gathering logs for kube-apiserver [b12a86631e54] ...
	I0930 04:14:29.134483    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b12a86631e54"
	I0930 04:14:29.148764    5073 logs.go:123] Gathering logs for kube-controller-manager [3e19f354d734] ...
	I0930 04:14:29.148777    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e19f354d734"
	I0930 04:14:29.166580    5073 logs.go:123] Gathering logs for kube-scheduler [8c2543425d7f] ...
	I0930 04:14:29.166591    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c2543425d7f"
	I0930 04:14:29.181038    5073 logs.go:123] Gathering logs for kubelet ...
	I0930 04:14:29.181049    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 04:14:29.216217    5073 logs.go:123] Gathering logs for dmesg ...
	I0930 04:14:29.216226    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 04:14:29.220248    5073 logs.go:123] Gathering logs for describe nodes ...
	I0930 04:14:29.220254    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0930 04:14:29.254639    5073 logs.go:123] Gathering logs for etcd [c36fc47de11b] ...
	I0930 04:14:29.254649    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c36fc47de11b"
	I0930 04:14:29.268879    5073 logs.go:123] Gathering logs for coredns [760ac328526b] ...
	I0930 04:14:29.268893    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 760ac328526b"
	I0930 04:14:29.280843    5073 logs.go:123] Gathering logs for Docker ...
	I0930 04:14:29.280852    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0930 04:14:29.306178    5073 logs.go:123] Gathering logs for container status ...
	I0930 04:14:29.306185    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 04:14:29.317873    5073 logs.go:123] Gathering logs for coredns [8d7a5aa53a70] ...
	I0930 04:14:29.317886    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d7a5aa53a70"
	I0930 04:14:29.329597    5073 logs.go:123] Gathering logs for coredns [d2a057a51189] ...
	I0930 04:14:29.329611    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2a057a51189"
	I0930 04:14:29.341847    5073 logs.go:123] Gathering logs for coredns [fc29cb8eea7a] ...
	I0930 04:14:29.341858    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc29cb8eea7a"
	I0930 04:14:29.353643    5073 logs.go:123] Gathering logs for kube-proxy [2181da380ab5] ...
	I0930 04:14:29.353656    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2181da380ab5"
	I0930 04:14:29.365574    5073 logs.go:123] Gathering logs for storage-provisioner [74695c7fad09] ...
	I0930 04:14:29.365584    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74695c7fad09"
	I0930 04:14:31.891977    5073 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:14:36.892910    5073 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:14:36.893175    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0930 04:14:36.918039    5073 logs.go:276] 1 containers: [b12a86631e54]
	I0930 04:14:36.918199    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0930 04:14:36.934885    5073 logs.go:276] 1 containers: [c36fc47de11b]
	I0930 04:14:36.934994    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0930 04:14:36.948880    5073 logs.go:276] 4 containers: [8d7a5aa53a70 760ac328526b d2a057a51189 fc29cb8eea7a]
	I0930 04:14:36.948966    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0930 04:14:36.960561    5073 logs.go:276] 1 containers: [8c2543425d7f]
	I0930 04:14:36.960628    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0930 04:14:36.970741    5073 logs.go:276] 1 containers: [2181da380ab5]
	I0930 04:14:36.970824    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0930 04:14:36.980963    5073 logs.go:276] 1 containers: [3e19f354d734]
	I0930 04:14:36.981028    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0930 04:14:36.990805    5073 logs.go:276] 0 containers: []
	W0930 04:14:36.990815    5073 logs.go:278] No container was found matching "kindnet"
	I0930 04:14:36.990884    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0930 04:14:37.001189    5073 logs.go:276] 1 containers: [74695c7fad09]
	I0930 04:14:37.001207    5073 logs.go:123] Gathering logs for kube-scheduler [8c2543425d7f] ...
	I0930 04:14:37.001213    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c2543425d7f"
	I0930 04:14:37.019116    5073 logs.go:123] Gathering logs for kube-proxy [2181da380ab5] ...
	I0930 04:14:37.019129    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2181da380ab5"
	I0930 04:14:37.035322    5073 logs.go:123] Gathering logs for etcd [c36fc47de11b] ...
	I0930 04:14:37.035335    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c36fc47de11b"
	I0930 04:14:37.048951    5073 logs.go:123] Gathering logs for coredns [760ac328526b] ...
	I0930 04:14:37.048964    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 760ac328526b"
	I0930 04:14:37.060010    5073 logs.go:123] Gathering logs for coredns [fc29cb8eea7a] ...
	I0930 04:14:37.060024    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc29cb8eea7a"
	I0930 04:14:37.071086    5073 logs.go:123] Gathering logs for kube-apiserver [b12a86631e54] ...
	I0930 04:14:37.071100    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b12a86631e54"
	I0930 04:14:37.084553    5073 logs.go:123] Gathering logs for storage-provisioner [74695c7fad09] ...
	I0930 04:14:37.084565    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74695c7fad09"
	I0930 04:14:37.096223    5073 logs.go:123] Gathering logs for container status ...
	I0930 04:14:37.096237    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 04:14:37.108276    5073 logs.go:123] Gathering logs for kubelet ...
	I0930 04:14:37.108286    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 04:14:37.144063    5073 logs.go:123] Gathering logs for dmesg ...
	I0930 04:14:37.144074    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 04:14:37.148941    5073 logs.go:123] Gathering logs for describe nodes ...
	I0930 04:14:37.148951    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0930 04:14:37.184078    5073 logs.go:123] Gathering logs for coredns [8d7a5aa53a70] ...
	I0930 04:14:37.184090    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d7a5aa53a70"
	I0930 04:14:37.203874    5073 logs.go:123] Gathering logs for coredns [d2a057a51189] ...
	I0930 04:14:37.203885    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2a057a51189"
	I0930 04:14:37.215704    5073 logs.go:123] Gathering logs for kube-controller-manager [3e19f354d734] ...
	I0930 04:14:37.215714    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e19f354d734"
	I0930 04:14:37.232740    5073 logs.go:123] Gathering logs for Docker ...
	I0930 04:14:37.232751    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0930 04:14:39.758139    5073 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:14:44.760715    5073 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:14:44.760818    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0930 04:14:44.773001    5073 logs.go:276] 1 containers: [b12a86631e54]
	I0930 04:14:44.773098    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0930 04:14:44.785733    5073 logs.go:276] 1 containers: [c36fc47de11b]
	I0930 04:14:44.785803    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0930 04:14:44.797877    5073 logs.go:276] 4 containers: [8d7a5aa53a70 760ac328526b d2a057a51189 fc29cb8eea7a]
	I0930 04:14:44.797982    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0930 04:14:44.809085    5073 logs.go:276] 1 containers: [8c2543425d7f]
	I0930 04:14:44.809166    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0930 04:14:44.823977    5073 logs.go:276] 1 containers: [2181da380ab5]
	I0930 04:14:44.824049    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0930 04:14:44.835410    5073 logs.go:276] 1 containers: [3e19f354d734]
	I0930 04:14:44.835509    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0930 04:14:44.846096    5073 logs.go:276] 0 containers: []
	W0930 04:14:44.846109    5073 logs.go:278] No container was found matching "kindnet"
	I0930 04:14:44.846177    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0930 04:14:44.858040    5073 logs.go:276] 1 containers: [74695c7fad09]
	I0930 04:14:44.858058    5073 logs.go:123] Gathering logs for kubelet ...
	I0930 04:14:44.858065    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 04:14:44.895085    5073 logs.go:123] Gathering logs for coredns [d2a057a51189] ...
	I0930 04:14:44.895103    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2a057a51189"
	I0930 04:14:44.908299    5073 logs.go:123] Gathering logs for kube-proxy [2181da380ab5] ...
	I0930 04:14:44.908310    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2181da380ab5"
	I0930 04:14:44.921802    5073 logs.go:123] Gathering logs for storage-provisioner [74695c7fad09] ...
	I0930 04:14:44.921811    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74695c7fad09"
	I0930 04:14:44.933702    5073 logs.go:123] Gathering logs for Docker ...
	I0930 04:14:44.933714    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0930 04:14:44.961005    5073 logs.go:123] Gathering logs for coredns [8d7a5aa53a70] ...
	I0930 04:14:44.961018    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d7a5aa53a70"
	I0930 04:14:44.973264    5073 logs.go:123] Gathering logs for coredns [760ac328526b] ...
	I0930 04:14:44.973275    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 760ac328526b"
	I0930 04:14:44.992038    5073 logs.go:123] Gathering logs for etcd [c36fc47de11b] ...
	I0930 04:14:44.992052    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c36fc47de11b"
	I0930 04:14:45.006874    5073 logs.go:123] Gathering logs for coredns [fc29cb8eea7a] ...
	I0930 04:14:45.006883    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc29cb8eea7a"
	I0930 04:14:45.019181    5073 logs.go:123] Gathering logs for kube-scheduler [8c2543425d7f] ...
	I0930 04:14:45.019194    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c2543425d7f"
	I0930 04:14:45.035172    5073 logs.go:123] Gathering logs for dmesg ...
	I0930 04:14:45.035187    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 04:14:45.040950    5073 logs.go:123] Gathering logs for kube-apiserver [b12a86631e54] ...
	I0930 04:14:45.040963    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b12a86631e54"
	I0930 04:14:45.055308    5073 logs.go:123] Gathering logs for container status ...
	I0930 04:14:45.055317    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 04:14:45.067962    5073 logs.go:123] Gathering logs for describe nodes ...
	I0930 04:14:45.067974    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0930 04:14:45.105290    5073 logs.go:123] Gathering logs for kube-controller-manager [3e19f354d734] ...
	I0930 04:14:45.105302    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e19f354d734"
	I0930 04:14:47.626576    5073 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:14:52.627843    5073 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:14:52.628296    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0930 04:14:52.668425    5073 logs.go:276] 1 containers: [b12a86631e54]
	I0930 04:14:52.668541    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0930 04:14:52.683487    5073 logs.go:276] 1 containers: [c36fc47de11b]
	I0930 04:14:52.683594    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0930 04:14:52.695946    5073 logs.go:276] 4 containers: [8d7a5aa53a70 760ac328526b d2a057a51189 fc29cb8eea7a]
	I0930 04:14:52.696025    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0930 04:14:52.706574    5073 logs.go:276] 1 containers: [8c2543425d7f]
	I0930 04:14:52.706657    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0930 04:14:52.717366    5073 logs.go:276] 1 containers: [2181da380ab5]
	I0930 04:14:52.717448    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0930 04:14:52.728167    5073 logs.go:276] 1 containers: [3e19f354d734]
	I0930 04:14:52.728239    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0930 04:14:52.738865    5073 logs.go:276] 0 containers: []
	W0930 04:14:52.738880    5073 logs.go:278] No container was found matching "kindnet"
	I0930 04:14:52.738951    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0930 04:14:52.751017    5073 logs.go:276] 1 containers: [74695c7fad09]
	I0930 04:14:52.751035    5073 logs.go:123] Gathering logs for kube-apiserver [b12a86631e54] ...
	I0930 04:14:52.751041    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b12a86631e54"
	I0930 04:14:52.765487    5073 logs.go:123] Gathering logs for kube-scheduler [8c2543425d7f] ...
	I0930 04:14:52.765499    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c2543425d7f"
	I0930 04:14:52.780313    5073 logs.go:123] Gathering logs for coredns [760ac328526b] ...
	I0930 04:14:52.780327    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 760ac328526b"
	I0930 04:14:52.792616    5073 logs.go:123] Gathering logs for Docker ...
	I0930 04:14:52.792626    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0930 04:14:52.816748    5073 logs.go:123] Gathering logs for describe nodes ...
	I0930 04:14:52.816755    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0930 04:14:52.851899    5073 logs.go:123] Gathering logs for coredns [8d7a5aa53a70] ...
	I0930 04:14:52.851911    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d7a5aa53a70"
	I0930 04:14:52.864167    5073 logs.go:123] Gathering logs for coredns [fc29cb8eea7a] ...
	I0930 04:14:52.864180    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc29cb8eea7a"
	I0930 04:14:52.882924    5073 logs.go:123] Gathering logs for kube-proxy [2181da380ab5] ...
	I0930 04:14:52.882935    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2181da380ab5"
	I0930 04:14:52.894753    5073 logs.go:123] Gathering logs for kube-controller-manager [3e19f354d734] ...
	I0930 04:14:52.894768    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e19f354d734"
	I0930 04:14:52.915029    5073 logs.go:123] Gathering logs for kubelet ...
	I0930 04:14:52.915040    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 04:14:52.948219    5073 logs.go:123] Gathering logs for etcd [c36fc47de11b] ...
	I0930 04:14:52.948226    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c36fc47de11b"
	I0930 04:14:52.961853    5073 logs.go:123] Gathering logs for storage-provisioner [74695c7fad09] ...
	I0930 04:14:52.961863    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74695c7fad09"
	I0930 04:14:52.973497    5073 logs.go:123] Gathering logs for container status ...
	I0930 04:14:52.973507    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 04:14:52.985151    5073 logs.go:123] Gathering logs for dmesg ...
	I0930 04:14:52.985160    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 04:14:52.989724    5073 logs.go:123] Gathering logs for coredns [d2a057a51189] ...
	I0930 04:14:52.989733    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2a057a51189"
	I0930 04:14:55.503503    5073 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:15:00.506363    5073 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:15:00.506801    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0930 04:15:00.545309    5073 logs.go:276] 1 containers: [b12a86631e54]
	I0930 04:15:00.545480    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0930 04:15:00.566873    5073 logs.go:276] 1 containers: [c36fc47de11b]
	I0930 04:15:00.567015    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0930 04:15:00.584916    5073 logs.go:276] 4 containers: [8d7a5aa53a70 760ac328526b d2a057a51189 fc29cb8eea7a]
	I0930 04:15:00.585013    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0930 04:15:00.599290    5073 logs.go:276] 1 containers: [8c2543425d7f]
	I0930 04:15:00.599369    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0930 04:15:00.609762    5073 logs.go:276] 1 containers: [2181da380ab5]
	I0930 04:15:00.609842    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0930 04:15:00.619863    5073 logs.go:276] 1 containers: [3e19f354d734]
	I0930 04:15:00.619939    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0930 04:15:00.629840    5073 logs.go:276] 0 containers: []
	W0930 04:15:00.629855    5073 logs.go:278] No container was found matching "kindnet"
	I0930 04:15:00.629924    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0930 04:15:00.640101    5073 logs.go:276] 1 containers: [74695c7fad09]
	I0930 04:15:00.640120    5073 logs.go:123] Gathering logs for Docker ...
	I0930 04:15:00.640126    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0930 04:15:00.663853    5073 logs.go:123] Gathering logs for coredns [8d7a5aa53a70] ...
	I0930 04:15:00.663860    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d7a5aa53a70"
	I0930 04:15:00.675599    5073 logs.go:123] Gathering logs for coredns [760ac328526b] ...
	I0930 04:15:00.675612    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 760ac328526b"
	I0930 04:15:00.687791    5073 logs.go:123] Gathering logs for kube-scheduler [8c2543425d7f] ...
	I0930 04:15:00.687801    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c2543425d7f"
	I0930 04:15:00.701744    5073 logs.go:123] Gathering logs for kube-proxy [2181da380ab5] ...
	I0930 04:15:00.701756    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2181da380ab5"
	I0930 04:15:00.713743    5073 logs.go:123] Gathering logs for kube-controller-manager [3e19f354d734] ...
	I0930 04:15:00.713756    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e19f354d734"
	I0930 04:15:00.732034    5073 logs.go:123] Gathering logs for container status ...
	I0930 04:15:00.732044    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 04:15:00.746822    5073 logs.go:123] Gathering logs for dmesg ...
	I0930 04:15:00.746837    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 04:15:00.751835    5073 logs.go:123] Gathering logs for describe nodes ...
	I0930 04:15:00.751844    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0930 04:15:00.789908    5073 logs.go:123] Gathering logs for kube-apiserver [b12a86631e54] ...
	I0930 04:15:00.789919    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b12a86631e54"
	I0930 04:15:00.807915    5073 logs.go:123] Gathering logs for etcd [c36fc47de11b] ...
	I0930 04:15:00.807927    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c36fc47de11b"
	I0930 04:15:00.822140    5073 logs.go:123] Gathering logs for coredns [fc29cb8eea7a] ...
	I0930 04:15:00.822153    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc29cb8eea7a"
	I0930 04:15:00.833841    5073 logs.go:123] Gathering logs for kubelet ...
	I0930 04:15:00.833854    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 04:15:00.868675    5073 logs.go:123] Gathering logs for storage-provisioner [74695c7fad09] ...
	I0930 04:15:00.868682    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74695c7fad09"
	I0930 04:15:00.880682    5073 logs.go:123] Gathering logs for coredns [d2a057a51189] ...
	I0930 04:15:00.880692    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2a057a51189"
	I0930 04:15:03.393044    5073 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:15:08.395658    5073 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:15:08.395837    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0930 04:15:08.411791    5073 logs.go:276] 1 containers: [b12a86631e54]
	I0930 04:15:08.411879    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0930 04:15:08.426230    5073 logs.go:276] 1 containers: [c36fc47de11b]
	I0930 04:15:08.426316    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0930 04:15:08.437243    5073 logs.go:276] 4 containers: [8d7a5aa53a70 760ac328526b d2a057a51189 fc29cb8eea7a]
	I0930 04:15:08.437327    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0930 04:15:08.450944    5073 logs.go:276] 1 containers: [8c2543425d7f]
	I0930 04:15:08.451029    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0930 04:15:08.462412    5073 logs.go:276] 1 containers: [2181da380ab5]
	I0930 04:15:08.462516    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0930 04:15:08.474876    5073 logs.go:276] 1 containers: [3e19f354d734]
	I0930 04:15:08.474951    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0930 04:15:08.486262    5073 logs.go:276] 0 containers: []
	W0930 04:15:08.486275    5073 logs.go:278] No container was found matching "kindnet"
	I0930 04:15:08.486345    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0930 04:15:08.497910    5073 logs.go:276] 1 containers: [74695c7fad09]
	I0930 04:15:08.497934    5073 logs.go:123] Gathering logs for kube-apiserver [b12a86631e54] ...
	I0930 04:15:08.497940    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b12a86631e54"
	I0930 04:15:08.513335    5073 logs.go:123] Gathering logs for coredns [760ac328526b] ...
	I0930 04:15:08.513348    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 760ac328526b"
	I0930 04:15:08.526794    5073 logs.go:123] Gathering logs for coredns [d2a057a51189] ...
	I0930 04:15:08.526810    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2a057a51189"
	I0930 04:15:08.543217    5073 logs.go:123] Gathering logs for kube-proxy [2181da380ab5] ...
	I0930 04:15:08.543228    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2181da380ab5"
	I0930 04:15:08.556094    5073 logs.go:123] Gathering logs for kubelet ...
	I0930 04:15:08.556107    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 04:15:08.592918    5073 logs.go:123] Gathering logs for dmesg ...
	I0930 04:15:08.592937    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 04:15:08.597888    5073 logs.go:123] Gathering logs for describe nodes ...
	I0930 04:15:08.597899    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0930 04:15:08.634464    5073 logs.go:123] Gathering logs for etcd [c36fc47de11b] ...
	I0930 04:15:08.634476    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c36fc47de11b"
	I0930 04:15:08.650125    5073 logs.go:123] Gathering logs for coredns [8d7a5aa53a70] ...
	I0930 04:15:08.650140    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d7a5aa53a70"
	I0930 04:15:08.663276    5073 logs.go:123] Gathering logs for kube-scheduler [8c2543425d7f] ...
	I0930 04:15:08.663286    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c2543425d7f"
	I0930 04:15:08.680082    5073 logs.go:123] Gathering logs for coredns [fc29cb8eea7a] ...
	I0930 04:15:08.680098    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc29cb8eea7a"
	I0930 04:15:08.694790    5073 logs.go:123] Gathering logs for kube-controller-manager [3e19f354d734] ...
	I0930 04:15:08.694803    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e19f354d734"
	I0930 04:15:08.714840    5073 logs.go:123] Gathering logs for storage-provisioner [74695c7fad09] ...
	I0930 04:15:08.714852    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74695c7fad09"
	I0930 04:15:08.727071    5073 logs.go:123] Gathering logs for Docker ...
	I0930 04:15:08.727081    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0930 04:15:08.751904    5073 logs.go:123] Gathering logs for container status ...
	I0930 04:15:08.751919    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 04:15:11.267910    5073 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:15:16.270111    5073 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:15:16.270634    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0930 04:15:16.310625    5073 logs.go:276] 1 containers: [b12a86631e54]
	I0930 04:15:16.310805    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0930 04:15:16.327972    5073 logs.go:276] 1 containers: [c36fc47de11b]
	I0930 04:15:16.328074    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0930 04:15:16.342090    5073 logs.go:276] 4 containers: [8d7a5aa53a70 760ac328526b d2a057a51189 fc29cb8eea7a]
	I0930 04:15:16.342184    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0930 04:15:16.353966    5073 logs.go:276] 1 containers: [8c2543425d7f]
	I0930 04:15:16.354041    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0930 04:15:16.364658    5073 logs.go:276] 1 containers: [2181da380ab5]
	I0930 04:15:16.364728    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0930 04:15:16.375048    5073 logs.go:276] 1 containers: [3e19f354d734]
	I0930 04:15:16.375126    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0930 04:15:16.390161    5073 logs.go:276] 0 containers: []
	W0930 04:15:16.390175    5073 logs.go:278] No container was found matching "kindnet"
	I0930 04:15:16.390246    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0930 04:15:16.400130    5073 logs.go:276] 1 containers: [74695c7fad09]
	I0930 04:15:16.400147    5073 logs.go:123] Gathering logs for kube-apiserver [b12a86631e54] ...
	I0930 04:15:16.400153    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b12a86631e54"
	I0930 04:15:16.414732    5073 logs.go:123] Gathering logs for coredns [8d7a5aa53a70] ...
	I0930 04:15:16.414743    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d7a5aa53a70"
	I0930 04:15:16.426775    5073 logs.go:123] Gathering logs for kube-proxy [2181da380ab5] ...
	I0930 04:15:16.426784    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2181da380ab5"
	I0930 04:15:16.438057    5073 logs.go:123] Gathering logs for kube-controller-manager [3e19f354d734] ...
	I0930 04:15:16.438068    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e19f354d734"
	I0930 04:15:16.459749    5073 logs.go:123] Gathering logs for dmesg ...
	I0930 04:15:16.459762    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 04:15:16.464162    5073 logs.go:123] Gathering logs for etcd [c36fc47de11b] ...
	I0930 04:15:16.464169    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c36fc47de11b"
	I0930 04:15:16.478023    5073 logs.go:123] Gathering logs for coredns [d2a057a51189] ...
	I0930 04:15:16.478034    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2a057a51189"
	I0930 04:15:16.489286    5073 logs.go:123] Gathering logs for kube-scheduler [8c2543425d7f] ...
	I0930 04:15:16.489298    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c2543425d7f"
	I0930 04:15:16.504160    5073 logs.go:123] Gathering logs for storage-provisioner [74695c7fad09] ...
	I0930 04:15:16.504171    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74695c7fad09"
	I0930 04:15:16.515107    5073 logs.go:123] Gathering logs for kubelet ...
	I0930 04:15:16.515116    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 04:15:16.548800    5073 logs.go:123] Gathering logs for describe nodes ...
	I0930 04:15:16.548806    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0930 04:15:16.582231    5073 logs.go:123] Gathering logs for coredns [760ac328526b] ...
	I0930 04:15:16.582246    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 760ac328526b"
	I0930 04:15:16.594080    5073 logs.go:123] Gathering logs for coredns [fc29cb8eea7a] ...
	I0930 04:15:16.594088    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc29cb8eea7a"
	I0930 04:15:16.605486    5073 logs.go:123] Gathering logs for Docker ...
	I0930 04:15:16.605494    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0930 04:15:16.630475    5073 logs.go:123] Gathering logs for container status ...
	I0930 04:15:16.630483    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 04:15:19.143917    5073 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:15:24.146747    5073 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:15:24.147289    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0930 04:15:24.187897    5073 logs.go:276] 1 containers: [b12a86631e54]
	I0930 04:15:24.188052    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0930 04:15:24.210457    5073 logs.go:276] 1 containers: [c36fc47de11b]
	I0930 04:15:24.210567    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0930 04:15:24.225877    5073 logs.go:276] 4 containers: [8d7a5aa53a70 760ac328526b d2a057a51189 fc29cb8eea7a]
	I0930 04:15:24.225973    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0930 04:15:24.238071    5073 logs.go:276] 1 containers: [8c2543425d7f]
	I0930 04:15:24.238153    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0930 04:15:24.249161    5073 logs.go:276] 1 containers: [2181da380ab5]
	I0930 04:15:24.249240    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0930 04:15:24.260057    5073 logs.go:276] 1 containers: [3e19f354d734]
	I0930 04:15:24.260136    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0930 04:15:24.270928    5073 logs.go:276] 0 containers: []
	W0930 04:15:24.270940    5073 logs.go:278] No container was found matching "kindnet"
	I0930 04:15:24.271005    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0930 04:15:24.281232    5073 logs.go:276] 1 containers: [74695c7fad09]
	I0930 04:15:24.281247    5073 logs.go:123] Gathering logs for kube-controller-manager [3e19f354d734] ...
	I0930 04:15:24.281252    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e19f354d734"
	I0930 04:15:24.300402    5073 logs.go:123] Gathering logs for storage-provisioner [74695c7fad09] ...
	I0930 04:15:24.300414    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74695c7fad09"
	I0930 04:15:24.311824    5073 logs.go:123] Gathering logs for coredns [fc29cb8eea7a] ...
	I0930 04:15:24.311837    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc29cb8eea7a"
	I0930 04:15:24.322901    5073 logs.go:123] Gathering logs for container status ...
	I0930 04:15:24.322915    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 04:15:24.335464    5073 logs.go:123] Gathering logs for dmesg ...
	I0930 04:15:24.335476    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 04:15:24.340080    5073 logs.go:123] Gathering logs for describe nodes ...
	I0930 04:15:24.340087    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0930 04:15:24.375017    5073 logs.go:123] Gathering logs for kube-apiserver [b12a86631e54] ...
	I0930 04:15:24.375031    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b12a86631e54"
	I0930 04:15:24.389364    5073 logs.go:123] Gathering logs for etcd [c36fc47de11b] ...
	I0930 04:15:24.389376    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c36fc47de11b"
	I0930 04:15:24.402949    5073 logs.go:123] Gathering logs for coredns [8d7a5aa53a70] ...
	I0930 04:15:24.402960    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d7a5aa53a70"
	I0930 04:15:24.414572    5073 logs.go:123] Gathering logs for coredns [d2a057a51189] ...
	I0930 04:15:24.414582    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2a057a51189"
	I0930 04:15:24.426197    5073 logs.go:123] Gathering logs for kubelet ...
	I0930 04:15:24.426208    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 04:15:24.461159    5073 logs.go:123] Gathering logs for kube-scheduler [8c2543425d7f] ...
	I0930 04:15:24.461170    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c2543425d7f"
	I0930 04:15:24.475729    5073 logs.go:123] Gathering logs for kube-proxy [2181da380ab5] ...
	I0930 04:15:24.475745    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2181da380ab5"
	I0930 04:15:24.487529    5073 logs.go:123] Gathering logs for Docker ...
	I0930 04:15:24.487543    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0930 04:15:24.512455    5073 logs.go:123] Gathering logs for coredns [760ac328526b] ...
	I0930 04:15:24.512465    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 760ac328526b"
	I0930 04:15:27.024104    5073 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:15:32.024560    5073 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:15:32.025083    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0930 04:15:32.055006    5073 logs.go:276] 1 containers: [b12a86631e54]
	I0930 04:15:32.055158    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0930 04:15:32.073402    5073 logs.go:276] 1 containers: [c36fc47de11b]
	I0930 04:15:32.073511    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0930 04:15:32.091279    5073 logs.go:276] 4 containers: [8d7a5aa53a70 760ac328526b d2a057a51189 fc29cb8eea7a]
	I0930 04:15:32.091383    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0930 04:15:32.103075    5073 logs.go:276] 1 containers: [8c2543425d7f]
	I0930 04:15:32.103147    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0930 04:15:32.113481    5073 logs.go:276] 1 containers: [2181da380ab5]
	I0930 04:15:32.113553    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0930 04:15:32.129924    5073 logs.go:276] 1 containers: [3e19f354d734]
	I0930 04:15:32.129997    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0930 04:15:32.140880    5073 logs.go:276] 0 containers: []
	W0930 04:15:32.140893    5073 logs.go:278] No container was found matching "kindnet"
	I0930 04:15:32.140960    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0930 04:15:32.151457    5073 logs.go:276] 1 containers: [74695c7fad09]
	I0930 04:15:32.151475    5073 logs.go:123] Gathering logs for describe nodes ...
	I0930 04:15:32.151481    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0930 04:15:32.186370    5073 logs.go:123] Gathering logs for etcd [c36fc47de11b] ...
	I0930 04:15:32.186383    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c36fc47de11b"
	I0930 04:15:32.202537    5073 logs.go:123] Gathering logs for coredns [760ac328526b] ...
	I0930 04:15:32.202548    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 760ac328526b"
	I0930 04:15:32.214274    5073 logs.go:123] Gathering logs for coredns [d2a057a51189] ...
	I0930 04:15:32.214288    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2a057a51189"
	I0930 04:15:32.226008    5073 logs.go:123] Gathering logs for storage-provisioner [74695c7fad09] ...
	I0930 04:15:32.226023    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74695c7fad09"
	I0930 04:15:32.238185    5073 logs.go:123] Gathering logs for container status ...
	I0930 04:15:32.238199    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 04:15:32.249882    5073 logs.go:123] Gathering logs for kube-apiserver [b12a86631e54] ...
	I0930 04:15:32.249894    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b12a86631e54"
	I0930 04:15:32.265426    5073 logs.go:123] Gathering logs for coredns [8d7a5aa53a70] ...
	I0930 04:15:32.265439    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d7a5aa53a70"
	I0930 04:15:32.278328    5073 logs.go:123] Gathering logs for coredns [fc29cb8eea7a] ...
	I0930 04:15:32.278341    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc29cb8eea7a"
	I0930 04:15:32.290681    5073 logs.go:123] Gathering logs for kube-proxy [2181da380ab5] ...
	I0930 04:15:32.290695    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2181da380ab5"
	I0930 04:15:32.301799    5073 logs.go:123] Gathering logs for kubelet ...
	I0930 04:15:32.301811    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 04:15:32.336869    5073 logs.go:123] Gathering logs for dmesg ...
	I0930 04:15:32.336879    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 04:15:32.340995    5073 logs.go:123] Gathering logs for kube-scheduler [8c2543425d7f] ...
	I0930 04:15:32.341002    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c2543425d7f"
	I0930 04:15:32.355389    5073 logs.go:123] Gathering logs for kube-controller-manager [3e19f354d734] ...
	I0930 04:15:32.355399    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e19f354d734"
	I0930 04:15:32.372249    5073 logs.go:123] Gathering logs for Docker ...
	I0930 04:15:32.372264    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0930 04:15:34.897712    5073 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:15:39.899918    5073 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:15:39.900404    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0930 04:15:39.934179    5073 logs.go:276] 1 containers: [b12a86631e54]
	I0930 04:15:39.934322    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0930 04:15:39.957869    5073 logs.go:276] 1 containers: [c36fc47de11b]
	I0930 04:15:39.957969    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0930 04:15:39.971753    5073 logs.go:276] 4 containers: [8d7a5aa53a70 760ac328526b d2a057a51189 fc29cb8eea7a]
	I0930 04:15:39.971842    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0930 04:15:39.983244    5073 logs.go:276] 1 containers: [8c2543425d7f]
	I0930 04:15:39.983317    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0930 04:15:39.994148    5073 logs.go:276] 1 containers: [2181da380ab5]
	I0930 04:15:39.994229    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0930 04:15:40.004534    5073 logs.go:276] 1 containers: [3e19f354d734]
	I0930 04:15:40.004602    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0930 04:15:40.014681    5073 logs.go:276] 0 containers: []
	W0930 04:15:40.014691    5073 logs.go:278] No container was found matching "kindnet"
	I0930 04:15:40.014758    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0930 04:15:40.025102    5073 logs.go:276] 1 containers: [74695c7fad09]
	I0930 04:15:40.025122    5073 logs.go:123] Gathering logs for describe nodes ...
	I0930 04:15:40.025128    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0930 04:15:40.060870    5073 logs.go:123] Gathering logs for kube-apiserver [b12a86631e54] ...
	I0930 04:15:40.060881    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b12a86631e54"
	I0930 04:15:40.078629    5073 logs.go:123] Gathering logs for coredns [8d7a5aa53a70] ...
	I0930 04:15:40.078639    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d7a5aa53a70"
	I0930 04:15:40.091378    5073 logs.go:123] Gathering logs for storage-provisioner [74695c7fad09] ...
	I0930 04:15:40.091387    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74695c7fad09"
	I0930 04:15:40.102645    5073 logs.go:123] Gathering logs for kubelet ...
	I0930 04:15:40.102655    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 04:15:40.139417    5073 logs.go:123] Gathering logs for Docker ...
	I0930 04:15:40.139436    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0930 04:15:40.162944    5073 logs.go:123] Gathering logs for container status ...
	I0930 04:15:40.162955    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 04:15:40.175112    5073 logs.go:123] Gathering logs for dmesg ...
	I0930 04:15:40.175125    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 04:15:40.179646    5073 logs.go:123] Gathering logs for coredns [d2a057a51189] ...
	I0930 04:15:40.179655    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2a057a51189"
	I0930 04:15:40.191446    5073 logs.go:123] Gathering logs for coredns [fc29cb8eea7a] ...
	I0930 04:15:40.191459    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc29cb8eea7a"
	I0930 04:15:40.202929    5073 logs.go:123] Gathering logs for kube-scheduler [8c2543425d7f] ...
	I0930 04:15:40.202939    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c2543425d7f"
	I0930 04:15:40.217503    5073 logs.go:123] Gathering logs for etcd [c36fc47de11b] ...
	I0930 04:15:40.217515    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c36fc47de11b"
	I0930 04:15:40.234973    5073 logs.go:123] Gathering logs for coredns [760ac328526b] ...
	I0930 04:15:40.234986    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 760ac328526b"
	I0930 04:15:40.246356    5073 logs.go:123] Gathering logs for kube-proxy [2181da380ab5] ...
	I0930 04:15:40.246371    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2181da380ab5"
	I0930 04:15:40.259092    5073 logs.go:123] Gathering logs for kube-controller-manager [3e19f354d734] ...
	I0930 04:15:40.259102    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e19f354d734"
	I0930 04:15:42.782249    5073 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:15:47.784450    5073 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:15:47.784758    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0930 04:15:47.807891    5073 logs.go:276] 1 containers: [b12a86631e54]
	I0930 04:15:47.808031    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0930 04:15:47.824468    5073 logs.go:276] 1 containers: [c36fc47de11b]
	I0930 04:15:47.824564    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0930 04:15:47.838078    5073 logs.go:276] 4 containers: [8d7a5aa53a70 760ac328526b d2a057a51189 fc29cb8eea7a]
	I0930 04:15:47.838174    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0930 04:15:47.848767    5073 logs.go:276] 1 containers: [8c2543425d7f]
	I0930 04:15:47.848847    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0930 04:15:47.858982    5073 logs.go:276] 1 containers: [2181da380ab5]
	I0930 04:15:47.859060    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0930 04:15:47.869125    5073 logs.go:276] 1 containers: [3e19f354d734]
	I0930 04:15:47.869192    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0930 04:15:47.883666    5073 logs.go:276] 0 containers: []
	W0930 04:15:47.883677    5073 logs.go:278] No container was found matching "kindnet"
	I0930 04:15:47.883759    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0930 04:15:47.894143    5073 logs.go:276] 1 containers: [74695c7fad09]
	I0930 04:15:47.894161    5073 logs.go:123] Gathering logs for Docker ...
	I0930 04:15:47.894169    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0930 04:15:47.918656    5073 logs.go:123] Gathering logs for describe nodes ...
	I0930 04:15:47.918663    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0930 04:15:47.952805    5073 logs.go:123] Gathering logs for kube-proxy [2181da380ab5] ...
	I0930 04:15:47.952820    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2181da380ab5"
	I0930 04:15:47.964509    5073 logs.go:123] Gathering logs for coredns [8d7a5aa53a70] ...
	I0930 04:15:47.964519    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d7a5aa53a70"
	I0930 04:15:47.976483    5073 logs.go:123] Gathering logs for kube-controller-manager [3e19f354d734] ...
	I0930 04:15:47.976493    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e19f354d734"
	I0930 04:15:47.997589    5073 logs.go:123] Gathering logs for dmesg ...
	I0930 04:15:47.997599    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 04:15:48.001864    5073 logs.go:123] Gathering logs for container status ...
	I0930 04:15:48.001872    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 04:15:48.013696    5073 logs.go:123] Gathering logs for etcd [c36fc47de11b] ...
	I0930 04:15:48.013707    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c36fc47de11b"
	I0930 04:15:48.028224    5073 logs.go:123] Gathering logs for coredns [760ac328526b] ...
	I0930 04:15:48.028234    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 760ac328526b"
	I0930 04:15:48.040156    5073 logs.go:123] Gathering logs for coredns [d2a057a51189] ...
	I0930 04:15:48.040167    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2a057a51189"
	I0930 04:15:48.051712    5073 logs.go:123] Gathering logs for coredns [fc29cb8eea7a] ...
	I0930 04:15:48.051721    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc29cb8eea7a"
	I0930 04:15:48.063378    5073 logs.go:123] Gathering logs for kube-scheduler [8c2543425d7f] ...
	I0930 04:15:48.063393    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c2543425d7f"
	I0930 04:15:48.077350    5073 logs.go:123] Gathering logs for storage-provisioner [74695c7fad09] ...
	I0930 04:15:48.077360    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74695c7fad09"
	I0930 04:15:48.088669    5073 logs.go:123] Gathering logs for kubelet ...
	I0930 04:15:48.088679    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 04:15:48.122174    5073 logs.go:123] Gathering logs for kube-apiserver [b12a86631e54] ...
	I0930 04:15:48.122183    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b12a86631e54"
	I0930 04:15:50.644502    5073 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:15:55.646929    5073 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:15:55.647223    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0930 04:15:55.672537    5073 logs.go:276] 1 containers: [b12a86631e54]
	I0930 04:15:55.672667    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0930 04:15:55.689544    5073 logs.go:276] 1 containers: [c36fc47de11b]
	I0930 04:15:55.689639    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0930 04:15:55.708819    5073 logs.go:276] 4 containers: [8d7a5aa53a70 760ac328526b d2a057a51189 fc29cb8eea7a]
	I0930 04:15:55.708906    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0930 04:15:55.719588    5073 logs.go:276] 1 containers: [8c2543425d7f]
	I0930 04:15:55.719669    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0930 04:15:55.733331    5073 logs.go:276] 1 containers: [2181da380ab5]
	I0930 04:15:55.733412    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0930 04:15:55.749072    5073 logs.go:276] 1 containers: [3e19f354d734]
	I0930 04:15:55.749154    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0930 04:15:55.759406    5073 logs.go:276] 0 containers: []
	W0930 04:15:55.759415    5073 logs.go:278] No container was found matching "kindnet"
	I0930 04:15:55.759477    5073 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0930 04:15:55.770102    5073 logs.go:276] 1 containers: [74695c7fad09]
	I0930 04:15:55.770119    5073 logs.go:123] Gathering logs for kubelet ...
	I0930 04:15:55.770124    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 04:15:55.805071    5073 logs.go:123] Gathering logs for kube-proxy [2181da380ab5] ...
	I0930 04:15:55.805078    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2181da380ab5"
	I0930 04:15:55.819464    5073 logs.go:123] Gathering logs for describe nodes ...
	I0930 04:15:55.819475    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0930 04:15:55.854970    5073 logs.go:123] Gathering logs for kube-apiserver [b12a86631e54] ...
	I0930 04:15:55.854982    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b12a86631e54"
	I0930 04:15:55.874298    5073 logs.go:123] Gathering logs for kube-scheduler [8c2543425d7f] ...
	I0930 04:15:55.874308    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c2543425d7f"
	I0930 04:15:55.888969    5073 logs.go:123] Gathering logs for kube-controller-manager [3e19f354d734] ...
	I0930 04:15:55.888978    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e19f354d734"
	I0930 04:15:55.908697    5073 logs.go:123] Gathering logs for coredns [d2a057a51189] ...
	I0930 04:15:55.908705    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2a057a51189"
	I0930 04:15:55.921429    5073 logs.go:123] Gathering logs for Docker ...
	I0930 04:15:55.921439    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0930 04:15:55.943799    5073 logs.go:123] Gathering logs for container status ...
	I0930 04:15:55.943808    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 04:15:55.955118    5073 logs.go:123] Gathering logs for storage-provisioner [74695c7fad09] ...
	I0930 04:15:55.955130    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74695c7fad09"
	I0930 04:15:55.967010    5073 logs.go:123] Gathering logs for dmesg ...
	I0930 04:15:55.967022    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 04:15:55.971855    5073 logs.go:123] Gathering logs for etcd [c36fc47de11b] ...
	I0930 04:15:55.971863    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c36fc47de11b"
	I0930 04:15:55.985879    5073 logs.go:123] Gathering logs for coredns [8d7a5aa53a70] ...
	I0930 04:15:55.985890    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d7a5aa53a70"
	I0930 04:15:55.996929    5073 logs.go:123] Gathering logs for coredns [760ac328526b] ...
	I0930 04:15:55.996941    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 760ac328526b"
	I0930 04:15:56.008310    5073 logs.go:123] Gathering logs for coredns [fc29cb8eea7a] ...
	I0930 04:15:56.008322    5073 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc29cb8eea7a"
	I0930 04:15:58.520078    5073 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0930 04:16:03.522323    5073 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0930 04:16:03.527588    5073 out.go:201] 
	W0930 04:16:03.531619    5073 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0930 04:16:03.531655    5073 out.go:270] * 
	W0930 04:16:03.534210    5073 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0930 04:16:03.549588    5073 out.go:201] 

** /stderr **
version_upgrade_test.go:200: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p stopped-upgrade-312000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (581.02s)
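
The loop that fills the preceding stderr is minikube's API-server health wait: roughly every three seconds it issues a GET against https://10.0.2.15:8443/healthz with a ~5s client timeout, re-lists the component containers and gathers their logs on each miss, and finally exits with GUEST_START once the overall 6m0s node-wait deadline expires. A minimal, self-contained sketch of that polling shape, assuming the URL, interval, and deadline shown in the log above (an illustration only, not minikube's actual api_server.go):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns 200 OK or the overall deadline passes.
func waitForHealthz(url string, interval, deadline time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second, // mirrors the ~5s "Client.Timeout exceeded" gaps in the log
		Transport: &http.Transport{
			// The bootstrapping apiserver serves a cluster-local certificate.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	stop := time.Now().Add(deadline)
	for time.Now().Before(stop) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz reported healthy
			}
		}
		time.Sleep(interval) // on a miss, minikube gathers component logs here before retrying
	}
	return fmt.Errorf("apiserver healthz never reported healthy: context deadline exceeded")
}

func main() {
	fmt.Println(waitForHealthz("https://10.0.2.15:8443/healthz", 3*time.Second, 6*time.Minute))
}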

TestPause/serial/Start (9.96s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-arm64 start -p pause-528000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 
E0930 04:13:21.329307    1929 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19734-1406/.minikube/profiles/addons-584000/client.crt: no such file or directory" logger="UnhandledError"
pause_test.go:80: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p pause-528000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 : exit status 80 (9.892652666s)

-- stdout --
	* [pause-528000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19734
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19734-1406/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19734-1406/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "pause-528000" primary control-plane node in "pause-528000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "pause-528000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p pause-528000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
pause_test.go:82: failed to start minikube with args: "out/minikube-darwin-arm64 start -p pause-528000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p pause-528000 -n pause-528000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p pause-528000 -n pause-528000: exit status 7 (64.093708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "pause-528000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestPause/serial/Start (9.96s)
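
Unlike the upgrade test above, this failure never reaches the health wait: the qemu2 driver cannot create the VM at all because nothing answers on the socket_vmnet unix socket. A hypothetical standalone probe (not part of minikube) that reproduces the "Connection refused" symptom directly:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// The path the qemu2 driver reports in the error above.
	const sock = "/var/run/socket_vmnet"
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		// Prints e.g. "connection refused" when the socket_vmnet daemon is down.
		fmt.Printf("cannot connect to %s: %v\n", sock, err)
		return
	}
	conn.Close()
	fmt.Printf("%s is accepting connections\n", sock)
}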

TestNoKubernetes/serial/StartWithK8s (9.86s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-953000 --driver=qemu2 
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-953000 --driver=qemu2 : exit status 80 (9.792385334s)

-- stdout --
	* [NoKubernetes-953000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19734
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19734-1406/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19734-1406/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "NoKubernetes-953000" primary control-plane node in "NoKubernetes-953000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "NoKubernetes-953000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-953000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-953000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-953000 -n NoKubernetes-953000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-953000 -n NoKubernetes-953000: exit status 7 (65.811208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-953000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (9.86s)
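
The stdout above also shows the driver's one-retry policy: the first "Creating qemu2 VM" attempt fails, minikube deletes the half-created profile, creates it once more, and only then exits with GUEST_PROVISION. A hedged sketch of that shape; startVM and deleteVM are placeholder names, not minikube's real API:

package main

import (
	"errors"
	"fmt"
)

// startVM stands in for the driver's create/start step; here it always fails
// the way this run does, with socket_vmnet refusing the connection.
func startVM(profile string) error {
	return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
}

// deleteVM stands in for the "Deleting <profile> in qemu2 ..." step between attempts.
func deleteVM(profile string) {
	fmt.Printf("* Deleting %q in qemu2 ...\n", profile)
}

func startWithRetry(profile string) error {
	if err := startVM(profile); err != nil {
		fmt.Println("! StartHost failed, but will try again:", err)
		deleteVM(profile) // recreate from scratch before the second and final attempt
		return startVM(profile)
	}
	return nil
}

func main() {
	if err := startWithRetry("NoKubernetes-953000"); err != nil {
		fmt.Println("X Exiting due to GUEST_PROVISION:", err)
	}
}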

TestNoKubernetes/serial/StartWithStopK8s (5.82s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-953000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-953000 --no-kubernetes --driver=qemu2 : exit status 80 (5.77866425s)

-- stdout --
	* [NoKubernetes-953000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19734
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19734-1406/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19734-1406/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-953000
	* Restarting existing qemu2 VM for "NoKubernetes-953000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-953000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-953000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:114: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-953000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-953000 -n NoKubernetes-953000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-953000 -n NoKubernetes-953000: exit status 7 (45.085917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-953000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithStopK8s (5.82s)
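
Each "(dbg) Non-zero exit ... exit status 80" line is the harness recording the command's exit code after capturing its output. In Go that code is read from the *exec.ExitError returned alongside CombinedOutput; a small illustration (not the test helpers' actual code; the binary path is simply the one under test here):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-darwin-arm64", "start", "-p", "NoKubernetes-953000", "--driver=qemu2")
	out, err := cmd.CombinedOutput() // stdout and stderr interleaved, as in the report
	var exitErr *exec.ExitError
	switch {
	case errors.As(err, &exitErr):
		fmt.Printf("Non-zero exit: exit status %d\n%s", exitErr.ExitCode(), out)
	case err != nil:
		fmt.Println("command did not start:", err) // e.g. binary not found
	default:
		fmt.Printf("ok:\n%s", out)
	}
}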

TestNoKubernetes/serial/Start (5.84s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-953000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:136: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-953000 --no-kubernetes --driver=qemu2 : exit status 80 (5.784538167s)

-- stdout --
	* [NoKubernetes-953000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19734
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19734-1406/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19734-1406/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-953000
	* Restarting existing qemu2 VM for "NoKubernetes-953000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-953000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-953000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:138: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-953000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-953000 -n NoKubernetes-953000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-953000 -n NoKubernetes-953000: exit status 7 (53.326167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-953000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/Start (5.84s)

TestNoKubernetes/serial/StartNoArgs (5.88s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-953000 --driver=qemu2 
no_kubernetes_test.go:191: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-953000 --driver=qemu2 : exit status 80 (5.814719959s)

-- stdout --
	* [NoKubernetes-953000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19734
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19734-1406/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19734-1406/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-953000
	* Restarting existing qemu2 VM for "NoKubernetes-953000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-953000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-953000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:193: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-953000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-953000 -n NoKubernetes-953000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-953000 -n NoKubernetes-953000: exit status 7 (61.25125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-953000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartNoArgs (5.88s)
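
Unlike the previous run, this start takes the "Restarting existing qemu2 VM" path instead of creating a new machine, yet it fails identically, which points at the host-side socket_vmnet daemon rather than the profile. A triage sketch (Go; the mapping from error shape to cause is a heuristic assumption, not documented minikube behavior) that separates "socket file absent" from "socket present but nothing listening":

	package main

	import (
		"errors"
		"fmt"
		"io/fs"
		"net"
		"os"
		"time"
	)

	func main() {
		const sock = "/var/run/socket_vmnet"
		info, err := os.Stat(sock)
		switch {
		case errors.Is(err, fs.ErrNotExist):
			fmt.Println("socket file missing: socket_vmnet likely never started")
		case err != nil:
			fmt.Printf("stat %s: %v\n", sock, err)
		case info.Mode()&os.ModeSocket == 0:
			fmt.Printf("%s exists but is not a unix socket\n", sock)
		default:
			if c, err := net.DialTimeout("unix", sock, 2*time.Second); err != nil {
				// A leftover socket file with no listener is refused,
				// which is the exact error shape in this report.
				fmt.Printf("socket present but refusing connections: %v\n", err)
			} else {
				c.Close()
				fmt.Println("socket_vmnet is up")
			}
		}
	}

Since the logged error is "Connection refused" rather than "no such file or directory", the socket file most likely still exists while the daemon behind it is gone.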

TestNetworkPlugins/group/auto/Start (9.98s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p auto-962000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p auto-962000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 : exit status 80 (9.970793708s)

-- stdout --
	* [auto-962000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19734
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19734-1406/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19734-1406/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "auto-962000" primary control-plane node in "auto-962000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "auto-962000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0930 04:14:34.132854    5308 out.go:345] Setting OutFile to fd 1 ...
	I0930 04:14:34.132976    5308 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 04:14:34.132979    5308 out.go:358] Setting ErrFile to fd 2...
	I0930 04:14:34.132981    5308 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 04:14:34.133127    5308 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19734-1406/.minikube/bin
	I0930 04:14:34.134211    5308 out.go:352] Setting JSON to false
	I0930 04:14:34.150718    5308 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4437,"bootTime":1727690437,"procs":509,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0930 04:14:34.150794    5308 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0930 04:14:34.158090    5308 out.go:177] * [auto-962000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0930 04:14:34.166084    5308 out.go:177]   - MINIKUBE_LOCATION=19734
	I0930 04:14:34.166154    5308 notify.go:220] Checking for updates...
	I0930 04:14:34.173028    5308 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19734-1406/kubeconfig
	I0930 04:14:34.176043    5308 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0930 04:14:34.180035    5308 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0930 04:14:34.183045    5308 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19734-1406/.minikube
	I0930 04:14:34.186035    5308 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0930 04:14:34.189369    5308 config.go:182] Loaded profile config "multinode-711000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0930 04:14:34.189430    5308 config.go:182] Loaded profile config "stopped-upgrade-312000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0930 04:14:34.189471    5308 driver.go:394] Setting default libvirt URI to qemu:///system
	I0930 04:14:34.193942    5308 out.go:177] * Using the qemu2 driver based on user configuration
	I0930 04:14:34.200985    5308 start.go:297] selected driver: qemu2
	I0930 04:14:34.200990    5308 start.go:901] validating driver "qemu2" against <nil>
	I0930 04:14:34.200995    5308 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0930 04:14:34.203061    5308 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0930 04:14:34.206035    5308 out.go:177] * Automatically selected the socket_vmnet network
	I0930 04:14:34.209124    5308 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0930 04:14:34.209145    5308 cni.go:84] Creating CNI manager for ""
	I0930 04:14:34.209187    5308 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0930 04:14:34.209196    5308 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0930 04:14:34.209221    5308 start.go:340] cluster config:
	{Name:auto-962000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:auto-962000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 04:14:34.212794    5308 iso.go:125] acquiring lock: {Name:mk0aae8f7f133a44b195b0eefe4c7ae573d14aac Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0930 04:14:34.221990    5308 out.go:177] * Starting "auto-962000" primary control-plane node in "auto-962000" cluster
	I0930 04:14:34.226036    5308 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0930 04:14:34.226057    5308 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0930 04:14:34.226073    5308 cache.go:56] Caching tarball of preloaded images
	I0930 04:14:34.226142    5308 preload.go:172] Found /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0930 04:14:34.226147    5308 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0930 04:14:34.226219    5308 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19734-1406/.minikube/profiles/auto-962000/config.json ...
	I0930 04:14:34.226231    5308 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19734-1406/.minikube/profiles/auto-962000/config.json: {Name:mked232de31e4778014ac0eba8ff5b1bdfbb029d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 04:14:34.226659    5308 start.go:360] acquireMachinesLock for auto-962000: {Name:mk6ecfc7d8126da9fa2597e515ffeee9de676ee7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0930 04:14:34.226696    5308 start.go:364] duration metric: took 30.334µs to acquireMachinesLock for "auto-962000"
	I0930 04:14:34.226707    5308 start.go:93] Provisioning new machine with config: &{Name:auto-962000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:auto-962000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0930 04:14:34.226743    5308 start.go:125] createHost starting for "" (driver="qemu2")
	I0930 04:14:34.235007    5308 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0930 04:14:34.252261    5308 start.go:159] libmachine.API.Create for "auto-962000" (driver="qemu2")
	I0930 04:14:34.252288    5308 client.go:168] LocalClient.Create starting
	I0930 04:14:34.252347    5308 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19734-1406/.minikube/certs/ca.pem
	I0930 04:14:34.252377    5308 main.go:141] libmachine: Decoding PEM data...
	I0930 04:14:34.252387    5308 main.go:141] libmachine: Parsing certificate...
	I0930 04:14:34.252428    5308 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19734-1406/.minikube/certs/cert.pem
	I0930 04:14:34.252450    5308 main.go:141] libmachine: Decoding PEM data...
	I0930 04:14:34.252459    5308 main.go:141] libmachine: Parsing certificate...
	I0930 04:14:34.252842    5308 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19734-1406/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0930 04:14:34.413949    5308 main.go:141] libmachine: Creating SSH key...
	I0930 04:14:34.593975    5308 main.go:141] libmachine: Creating Disk image...
	I0930 04:14:34.593985    5308 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0930 04:14:34.594216    5308 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/auto-962000/disk.qcow2.raw /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/auto-962000/disk.qcow2
	I0930 04:14:34.604089    5308 main.go:141] libmachine: STDOUT: 
	I0930 04:14:34.604108    5308 main.go:141] libmachine: STDERR: 
	I0930 04:14:34.604161    5308 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/auto-962000/disk.qcow2 +20000M
	I0930 04:14:34.612371    5308 main.go:141] libmachine: STDOUT: Image resized.
	
	I0930 04:14:34.612387    5308 main.go:141] libmachine: STDERR: 
	I0930 04:14:34.612400    5308 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/auto-962000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/auto-962000/disk.qcow2
	I0930 04:14:34.612410    5308 main.go:141] libmachine: Starting QEMU VM...
	I0930 04:14:34.612423    5308 qemu.go:418] Using hvf for hardware acceleration
	I0930 04:14:34.612450    5308 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/auto-962000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19734-1406/.minikube/machines/auto-962000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/auto-962000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e6:61:76:38:f6:8f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/auto-962000/disk.qcow2
	I0930 04:14:34.614085    5308 main.go:141] libmachine: STDOUT: 
	I0930 04:14:34.614103    5308 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0930 04:14:34.614124    5308 client.go:171] duration metric: took 361.835834ms to LocalClient.Create
	I0930 04:14:36.616426    5308 start.go:128] duration metric: took 2.389649084s to createHost
	I0930 04:14:36.616547    5308 start.go:83] releasing machines lock for "auto-962000", held for 2.389882375s
	W0930 04:14:36.616616    5308 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0930 04:14:36.627855    5308 out.go:177] * Deleting "auto-962000" in qemu2 ...
	W0930 04:14:36.672130    5308 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0930 04:14:36.672161    5308 start.go:729] Will try again in 5 seconds ...
	I0930 04:14:41.674259    5308 start.go:360] acquireMachinesLock for auto-962000: {Name:mk6ecfc7d8126da9fa2597e515ffeee9de676ee7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0930 04:14:41.674854    5308 start.go:364] duration metric: took 481.917µs to acquireMachinesLock for "auto-962000"
	I0930 04:14:41.674966    5308 start.go:93] Provisioning new machine with config: &{Name:auto-962000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:auto-962000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0930 04:14:41.675304    5308 start.go:125] createHost starting for "" (driver="qemu2")
	I0930 04:14:41.680228    5308 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0930 04:14:41.728652    5308 start.go:159] libmachine.API.Create for "auto-962000" (driver="qemu2")
	I0930 04:14:41.728711    5308 client.go:168] LocalClient.Create starting
	I0930 04:14:41.728849    5308 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19734-1406/.minikube/certs/ca.pem
	I0930 04:14:41.728923    5308 main.go:141] libmachine: Decoding PEM data...
	I0930 04:14:41.728939    5308 main.go:141] libmachine: Parsing certificate...
	I0930 04:14:41.729003    5308 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19734-1406/.minikube/certs/cert.pem
	I0930 04:14:41.729057    5308 main.go:141] libmachine: Decoding PEM data...
	I0930 04:14:41.729074    5308 main.go:141] libmachine: Parsing certificate...
	I0930 04:14:41.729790    5308 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19734-1406/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0930 04:14:41.896028    5308 main.go:141] libmachine: Creating SSH key...
	I0930 04:14:41.999801    5308 main.go:141] libmachine: Creating Disk image...
	I0930 04:14:41.999813    5308 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0930 04:14:42.000041    5308 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/auto-962000/disk.qcow2.raw /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/auto-962000/disk.qcow2
	I0930 04:14:42.009378    5308 main.go:141] libmachine: STDOUT: 
	I0930 04:14:42.009400    5308 main.go:141] libmachine: STDERR: 
	I0930 04:14:42.009471    5308 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/auto-962000/disk.qcow2 +20000M
	I0930 04:14:42.017826    5308 main.go:141] libmachine: STDOUT: Image resized.
	
	I0930 04:14:42.017840    5308 main.go:141] libmachine: STDERR: 
	I0930 04:14:42.017855    5308 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/auto-962000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/auto-962000/disk.qcow2
	I0930 04:14:42.017863    5308 main.go:141] libmachine: Starting QEMU VM...
	I0930 04:14:42.017873    5308 qemu.go:418] Using hvf for hardware acceleration
	I0930 04:14:42.017907    5308 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/auto-962000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19734-1406/.minikube/machines/auto-962000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/auto-962000/qemu.pid -device virtio-net-pci,netdev=net0,mac=26:6f:dc:8c:ee:41 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/auto-962000/disk.qcow2
	I0930 04:14:42.019674    5308 main.go:141] libmachine: STDOUT: 
	I0930 04:14:42.019715    5308 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0930 04:14:42.019731    5308 client.go:171] duration metric: took 291.02075ms to LocalClient.Create
	I0930 04:14:44.021902    5308 start.go:128] duration metric: took 2.346569042s to createHost
	I0930 04:14:44.022034    5308 start.go:83] releasing machines lock for "auto-962000", held for 2.347175166s
	W0930 04:14:44.022413    5308 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p auto-962000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p auto-962000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0930 04:14:44.039068    5308 out.go:201] 
	W0930 04:14:44.043138    5308 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0930 04:14:44.043212    5308 out.go:270] * 
	* 
	W0930 04:14:44.045788    5308 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0930 04:14:44.062097    5308 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/auto/Start (9.98s)
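
The --alsologtostderr trace above makes the failure mechanics visible: disk-image creation via qemu-img succeeds, and the actual launch runs qemu-system-aarch64 under /opt/socket_vmnet/bin/socket_vmnet_client, which must first connect to /var/run/socket_vmnet and hand QEMU the connected descriptor (`-netdev socket,id=net0,fd=3`); when that connect is refused, QEMU never starts. minikube's recovery loop then deletes the profile, waits, and retries once before exiting with GUEST_PROVISION. A condensed sketch of that control flow (Go; function name hypothetical, the single 5-second retry matches "Will try again in 5 seconds" in the trace):

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	// dialVmnet stands in for the StartHost attempt; in the real run the
	// connect is performed by socket_vmnet_client before QEMU launches.
	func dialVmnet(sock string) error {
		c, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			return err
		}
		return c.Close()
	}

	func main() {
		const sock = "/var/run/socket_vmnet"
		for attempt := 1; attempt <= 2; attempt++ {
			if err := dialVmnet(sock); err != nil {
				fmt.Printf("StartHost attempt %d failed: %v\n", attempt, err)
				if attempt == 1 {
					time.Sleep(5 * time.Second) // mirrors the logged retry delay
					continue
				}
				fmt.Println("exiting: GUEST_PROVISION") // a retry cannot revive a dead daemon
				return
			}
			fmt.Println("host started")
			return
		}
	}

With the daemon down, each createHost attempt fails in roughly 2.4 seconds, so two attempts plus the 5-second pause account for the test's ~10-second duration.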

TestNetworkPlugins/group/calico/Start (9.9s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p calico-962000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p calico-962000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 : exit status 80 (9.894843042s)

-- stdout --
	* [calico-962000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19734
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19734-1406/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19734-1406/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "calico-962000" primary control-plane node in "calico-962000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "calico-962000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0930 04:14:46.280613    5420 out.go:345] Setting OutFile to fd 1 ...
	I0930 04:14:46.280772    5420 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 04:14:46.280776    5420 out.go:358] Setting ErrFile to fd 2...
	I0930 04:14:46.280779    5420 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 04:14:46.280923    5420 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19734-1406/.minikube/bin
	I0930 04:14:46.282198    5420 out.go:352] Setting JSON to false
	I0930 04:14:46.300092    5420 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4449,"bootTime":1727690437,"procs":507,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0930 04:14:46.300176    5420 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0930 04:14:46.308170    5420 out.go:177] * [calico-962000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0930 04:14:46.316078    5420 out.go:177]   - MINIKUBE_LOCATION=19734
	I0930 04:14:46.316132    5420 notify.go:220] Checking for updates...
	I0930 04:14:46.324084    5420 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19734-1406/kubeconfig
	I0930 04:14:46.327201    5420 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0930 04:14:46.330035    5420 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0930 04:14:46.333068    5420 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19734-1406/.minikube
	I0930 04:14:46.336093    5420 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0930 04:14:46.337979    5420 config.go:182] Loaded profile config "multinode-711000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0930 04:14:46.338053    5420 config.go:182] Loaded profile config "stopped-upgrade-312000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0930 04:14:46.338110    5420 driver.go:394] Setting default libvirt URI to qemu:///system
	I0930 04:14:46.342095    5420 out.go:177] * Using the qemu2 driver based on user configuration
	I0930 04:14:46.348962    5420 start.go:297] selected driver: qemu2
	I0930 04:14:46.348969    5420 start.go:901] validating driver "qemu2" against <nil>
	I0930 04:14:46.348976    5420 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0930 04:14:46.351435    5420 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0930 04:14:46.355050    5420 out.go:177] * Automatically selected the socket_vmnet network
	I0930 04:14:46.358191    5420 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0930 04:14:46.358222    5420 cni.go:84] Creating CNI manager for "calico"
	I0930 04:14:46.358226    5420 start_flags.go:319] Found "Calico" CNI - setting NetworkPlugin=cni
	I0930 04:14:46.358261    5420 start.go:340] cluster config:
	{Name:calico-962000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:calico-962000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 04:14:46.362159    5420 iso.go:125] acquiring lock: {Name:mk0aae8f7f133a44b195b0eefe4c7ae573d14aac Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0930 04:14:46.369097    5420 out.go:177] * Starting "calico-962000" primary control-plane node in "calico-962000" cluster
	I0930 04:14:46.373121    5420 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0930 04:14:46.373139    5420 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0930 04:14:46.373152    5420 cache.go:56] Caching tarball of preloaded images
	I0930 04:14:46.373280    5420 preload.go:172] Found /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0930 04:14:46.373295    5420 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0930 04:14:46.373365    5420 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19734-1406/.minikube/profiles/calico-962000/config.json ...
	I0930 04:14:46.373378    5420 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19734-1406/.minikube/profiles/calico-962000/config.json: {Name:mk7e308c701fd5140e1efa709b01fbabcd32bba8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 04:14:46.373708    5420 start.go:360] acquireMachinesLock for calico-962000: {Name:mk6ecfc7d8126da9fa2597e515ffeee9de676ee7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0930 04:14:46.373750    5420 start.go:364] duration metric: took 36.125µs to acquireMachinesLock for "calico-962000"
	I0930 04:14:46.373763    5420 start.go:93] Provisioning new machine with config: &{Name:calico-962000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:calico-962000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0930 04:14:46.373806    5420 start.go:125] createHost starting for "" (driver="qemu2")
	I0930 04:14:46.377151    5420 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0930 04:14:46.395099    5420 start.go:159] libmachine.API.Create for "calico-962000" (driver="qemu2")
	I0930 04:14:46.395126    5420 client.go:168] LocalClient.Create starting
	I0930 04:14:46.395196    5420 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19734-1406/.minikube/certs/ca.pem
	I0930 04:14:46.395225    5420 main.go:141] libmachine: Decoding PEM data...
	I0930 04:14:46.395235    5420 main.go:141] libmachine: Parsing certificate...
	I0930 04:14:46.395280    5420 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19734-1406/.minikube/certs/cert.pem
	I0930 04:14:46.395302    5420 main.go:141] libmachine: Decoding PEM data...
	I0930 04:14:46.395313    5420 main.go:141] libmachine: Parsing certificate...
	I0930 04:14:46.395669    5420 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19734-1406/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0930 04:14:46.558689    5420 main.go:141] libmachine: Creating SSH key...
	I0930 04:14:46.718117    5420 main.go:141] libmachine: Creating Disk image...
	I0930 04:14:46.718126    5420 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0930 04:14:46.718364    5420 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/calico-962000/disk.qcow2.raw /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/calico-962000/disk.qcow2
	I0930 04:14:46.727853    5420 main.go:141] libmachine: STDOUT: 
	I0930 04:14:46.727871    5420 main.go:141] libmachine: STDERR: 
	I0930 04:14:46.727934    5420 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/calico-962000/disk.qcow2 +20000M
	I0930 04:14:46.735998    5420 main.go:141] libmachine: STDOUT: Image resized.
	
	I0930 04:14:46.736023    5420 main.go:141] libmachine: STDERR: 
	I0930 04:14:46.736038    5420 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/calico-962000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/calico-962000/disk.qcow2
	I0930 04:14:46.736042    5420 main.go:141] libmachine: Starting QEMU VM...
	I0930 04:14:46.736050    5420 qemu.go:418] Using hvf for hardware acceleration
	I0930 04:14:46.736080    5420 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/calico-962000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19734-1406/.minikube/machines/calico-962000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/calico-962000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8a:22:c0:e9:22:8b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/calico-962000/disk.qcow2
	I0930 04:14:46.737909    5420 main.go:141] libmachine: STDOUT: 
	I0930 04:14:46.737928    5420 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0930 04:14:46.737948    5420 client.go:171] duration metric: took 342.822625ms to LocalClient.Create
	I0930 04:14:48.740192    5420 start.go:128] duration metric: took 2.366391166s to createHost
	I0930 04:14:48.740286    5420 start.go:83] releasing machines lock for "calico-962000", held for 2.366567041s
	W0930 04:14:48.740350    5420 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0930 04:14:48.757326    5420 out.go:177] * Deleting "calico-962000" in qemu2 ...
	W0930 04:14:48.784322    5420 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0930 04:14:48.784342    5420 start.go:729] Will try again in 5 seconds ...
	I0930 04:14:53.786391    5420 start.go:360] acquireMachinesLock for calico-962000: {Name:mk6ecfc7d8126da9fa2597e515ffeee9de676ee7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0930 04:14:53.786615    5420 start.go:364] duration metric: took 175.458µs to acquireMachinesLock for "calico-962000"
	I0930 04:14:53.786638    5420 start.go:93] Provisioning new machine with config: &{Name:calico-962000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:calico-962000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0930 04:14:53.786683    5420 start.go:125] createHost starting for "" (driver="qemu2")
	I0930 04:14:53.798963    5420 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0930 04:14:53.815929    5420 start.go:159] libmachine.API.Create for "calico-962000" (driver="qemu2")
	I0930 04:14:53.815958    5420 client.go:168] LocalClient.Create starting
	I0930 04:14:53.816037    5420 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19734-1406/.minikube/certs/ca.pem
	I0930 04:14:53.816075    5420 main.go:141] libmachine: Decoding PEM data...
	I0930 04:14:53.816083    5420 main.go:141] libmachine: Parsing certificate...
	I0930 04:14:53.816114    5420 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19734-1406/.minikube/certs/cert.pem
	I0930 04:14:53.816141    5420 main.go:141] libmachine: Decoding PEM data...
	I0930 04:14:53.816151    5420 main.go:141] libmachine: Parsing certificate...
	I0930 04:14:53.816445    5420 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19734-1406/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0930 04:14:53.977573    5420 main.go:141] libmachine: Creating SSH key...
	I0930 04:14:54.080244    5420 main.go:141] libmachine: Creating Disk image...
	I0930 04:14:54.080251    5420 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0930 04:14:54.080461    5420 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/calico-962000/disk.qcow2.raw /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/calico-962000/disk.qcow2
	I0930 04:14:54.089899    5420 main.go:141] libmachine: STDOUT: 
	I0930 04:14:54.089926    5420 main.go:141] libmachine: STDERR: 
	I0930 04:14:54.089981    5420 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/calico-962000/disk.qcow2 +20000M
	I0930 04:14:54.097930    5420 main.go:141] libmachine: STDOUT: Image resized.
	
	I0930 04:14:54.097948    5420 main.go:141] libmachine: STDERR: 
	I0930 04:14:54.097960    5420 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/calico-962000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/calico-962000/disk.qcow2
	I0930 04:14:54.097965    5420 main.go:141] libmachine: Starting QEMU VM...
	I0930 04:14:54.097976    5420 qemu.go:418] Using hvf for hardware acceleration
	I0930 04:14:54.097999    5420 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/calico-962000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19734-1406/.minikube/machines/calico-962000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/calico-962000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f6:a7:a3:53:9a:87 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/calico-962000/disk.qcow2
	I0930 04:14:54.099770    5420 main.go:141] libmachine: STDOUT: 
	I0930 04:14:54.099783    5420 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0930 04:14:54.099796    5420 client.go:171] duration metric: took 283.840208ms to LocalClient.Create
	I0930 04:14:56.101948    5420 start.go:128] duration metric: took 2.315272625s to createHost
	I0930 04:14:56.102013    5420 start.go:83] releasing machines lock for "calico-962000", held for 2.315426709s
	W0930 04:14:56.102357    5420 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p calico-962000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p calico-962000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0930 04:14:56.111854    5420 out.go:201] 
	W0930 04:14:56.118868    5420 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0930 04:14:56.118891    5420 out.go:270] * 
	* 
	W0930 04:14:56.120453    5420 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0930 04:14:56.129814    5420 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (9.90s)
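
Every TestNetworkPlugins start in this report, regardless of CNI, fails inside ten seconds with the same socket_vmnet signature, so the CNI choice never comes into play. A sketch of a log classifier (Go; filename and usage hypothetical, the signature string is taken verbatim from the output above) that separates this environmental failure from a genuine plugin regression:

	package main

	import (
		"bufio"
		"fmt"
		"os"
		"strings"
	)

	func main() {
		if len(os.Args) != 2 {
			fmt.Fprintln(os.Stderr, "usage: classify <logfile>")
			os.Exit(2)
		}
		f, err := os.Open(os.Args[1]) // e.g. the logs.txt from `minikube logs --file=logs.txt`
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		defer f.Close()

		const signature = `Failed to connect to "/var/run/socket_vmnet": Connection refused`
		hits := 0
		sc := bufio.NewScanner(f)
		sc.Buffer(make([]byte, 0, 64*1024), 4*1024*1024) // cluster-config log lines are very long
		for sc.Scan() {
			if strings.Contains(sc.Text(), signature) {
				hits++
			}
		}
		if err := sc.Err(); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Printf("%d socket_vmnet refusals found\n", hits)
		if hits > 0 {
			fmt.Println("verdict: infrastructure failure, not a CNI regression")
		}
	}

Run as `go run classify.go logs.txt`; any non-zero count means the remaining network-plugin failures below can be read as the same root cause.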

TestNetworkPlugins/group/custom-flannel/Start (9.88s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p custom-flannel-962000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p custom-flannel-962000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 : exit status 80 (9.880030833s)

-- stdout --
	* [custom-flannel-962000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19734
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19734-1406/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19734-1406/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "custom-flannel-962000" primary control-plane node in "custom-flannel-962000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "custom-flannel-962000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0930 04:14:58.511987    5540 out.go:345] Setting OutFile to fd 1 ...
	I0930 04:14:58.512113    5540 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 04:14:58.512116    5540 out.go:358] Setting ErrFile to fd 2...
	I0930 04:14:58.512125    5540 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 04:14:58.512268    5540 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19734-1406/.minikube/bin
	I0930 04:14:58.513367    5540 out.go:352] Setting JSON to false
	I0930 04:14:58.530014    5540 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4461,"bootTime":1727690437,"procs":510,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0930 04:14:58.530096    5540 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0930 04:14:58.535947    5540 out.go:177] * [custom-flannel-962000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0930 04:14:58.543875    5540 out.go:177]   - MINIKUBE_LOCATION=19734
	I0930 04:14:58.543935    5540 notify.go:220] Checking for updates...
	I0930 04:14:58.549747    5540 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19734-1406/kubeconfig
	I0930 04:14:58.552811    5540 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0930 04:14:58.555832    5540 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0930 04:14:58.558767    5540 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19734-1406/.minikube
	I0930 04:14:58.561765    5540 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0930 04:14:58.565129    5540 config.go:182] Loaded profile config "multinode-711000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0930 04:14:58.565192    5540 config.go:182] Loaded profile config "stopped-upgrade-312000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0930 04:14:58.565244    5540 driver.go:394] Setting default libvirt URI to qemu:///system
	I0930 04:14:58.569773    5540 out.go:177] * Using the qemu2 driver based on user configuration
	I0930 04:14:58.576848    5540 start.go:297] selected driver: qemu2
	I0930 04:14:58.576853    5540 start.go:901] validating driver "qemu2" against <nil>
	I0930 04:14:58.576859    5540 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0930 04:14:58.579040    5540 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0930 04:14:58.582757    5540 out.go:177] * Automatically selected the socket_vmnet network
	I0930 04:14:58.585878    5540 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0930 04:14:58.585903    5540 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I0930 04:14:58.585919    5540 start_flags.go:319] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I0930 04:14:58.585946    5540 start.go:340] cluster config:
	{Name:custom-flannel-962000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:custom-flannel-962000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 04:14:58.589395    5540 iso.go:125] acquiring lock: {Name:mk0aae8f7f133a44b195b0eefe4c7ae573d14aac Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0930 04:14:58.591477    5540 out.go:177] * Starting "custom-flannel-962000" primary control-plane node in "custom-flannel-962000" cluster
	I0930 04:14:58.599819    5540 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0930 04:14:58.599834    5540 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0930 04:14:58.599845    5540 cache.go:56] Caching tarball of preloaded images
	I0930 04:14:58.599908    5540 preload.go:172] Found /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0930 04:14:58.599914    5540 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0930 04:14:58.599977    5540 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19734-1406/.minikube/profiles/custom-flannel-962000/config.json ...
	I0930 04:14:58.599987    5540 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19734-1406/.minikube/profiles/custom-flannel-962000/config.json: {Name:mk075304111a6853b48d82ff12c33cb4962dbfe3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 04:14:58.600194    5540 start.go:360] acquireMachinesLock for custom-flannel-962000: {Name:mk6ecfc7d8126da9fa2597e515ffeee9de676ee7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0930 04:14:58.600234    5540 start.go:364] duration metric: took 30.709µs to acquireMachinesLock for "custom-flannel-962000"
	I0930 04:14:58.600245    5540 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-962000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:custom-flannel-962000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0930 04:14:58.600271    5540 start.go:125] createHost starting for "" (driver="qemu2")
	I0930 04:14:58.608817    5540 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0930 04:14:58.625440    5540 start.go:159] libmachine.API.Create for "custom-flannel-962000" (driver="qemu2")
	I0930 04:14:58.625465    5540 client.go:168] LocalClient.Create starting
	I0930 04:14:58.625529    5540 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19734-1406/.minikube/certs/ca.pem
	I0930 04:14:58.625559    5540 main.go:141] libmachine: Decoding PEM data...
	I0930 04:14:58.625568    5540 main.go:141] libmachine: Parsing certificate...
	I0930 04:14:58.625613    5540 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19734-1406/.minikube/certs/cert.pem
	I0930 04:14:58.625638    5540 main.go:141] libmachine: Decoding PEM data...
	I0930 04:14:58.625645    5540 main.go:141] libmachine: Parsing certificate...
	I0930 04:14:58.626100    5540 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19734-1406/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0930 04:14:58.786529    5540 main.go:141] libmachine: Creating SSH key...
	I0930 04:14:58.920535    5540 main.go:141] libmachine: Creating Disk image...
	I0930 04:14:58.920547    5540 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0930 04:14:58.920777    5540 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/custom-flannel-962000/disk.qcow2.raw /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/custom-flannel-962000/disk.qcow2
	I0930 04:14:58.930311    5540 main.go:141] libmachine: STDOUT: 
	I0930 04:14:58.930328    5540 main.go:141] libmachine: STDERR: 
	I0930 04:14:58.930396    5540 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/custom-flannel-962000/disk.qcow2 +20000M
	I0930 04:14:58.938341    5540 main.go:141] libmachine: STDOUT: Image resized.
	
	I0930 04:14:58.938365    5540 main.go:141] libmachine: STDERR: 
	I0930 04:14:58.938381    5540 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/custom-flannel-962000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/custom-flannel-962000/disk.qcow2
	I0930 04:14:58.938387    5540 main.go:141] libmachine: Starting QEMU VM...
	I0930 04:14:58.938399    5540 qemu.go:418] Using hvf for hardware acceleration
	I0930 04:14:58.938427    5540 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/custom-flannel-962000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19734-1406/.minikube/machines/custom-flannel-962000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/custom-flannel-962000/qemu.pid -device virtio-net-pci,netdev=net0,mac=92:58:f3:5f:8d:4c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/custom-flannel-962000/disk.qcow2
	I0930 04:14:58.940099    5540 main.go:141] libmachine: STDOUT: 
	I0930 04:14:58.940115    5540 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0930 04:14:58.940135    5540 client.go:171] duration metric: took 314.670667ms to LocalClient.Create
	I0930 04:15:00.942215    5540 start.go:128] duration metric: took 2.341976583s to createHost
	I0930 04:15:00.942228    5540 start.go:83] releasing machines lock for "custom-flannel-962000", held for 2.342030333s
	W0930 04:15:00.942244    5540 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0930 04:15:00.946698    5540 out.go:177] * Deleting "custom-flannel-962000" in qemu2 ...
	W0930 04:15:00.962294    5540 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0930 04:15:00.962300    5540 start.go:729] Will try again in 5 seconds ...
	I0930 04:15:05.964335    5540 start.go:360] acquireMachinesLock for custom-flannel-962000: {Name:mk6ecfc7d8126da9fa2597e515ffeee9de676ee7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0930 04:15:05.964588    5540 start.go:364] duration metric: took 211.458µs to acquireMachinesLock for "custom-flannel-962000"
	I0930 04:15:05.964673    5540 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-962000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:custom-flannel-962000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0930 04:15:05.964819    5540 start.go:125] createHost starting for "" (driver="qemu2")
	I0930 04:15:05.984233    5540 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0930 04:15:06.024410    5540 start.go:159] libmachine.API.Create for "custom-flannel-962000" (driver="qemu2")
	I0930 04:15:06.024456    5540 client.go:168] LocalClient.Create starting
	I0930 04:15:06.024581    5540 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19734-1406/.minikube/certs/ca.pem
	I0930 04:15:06.024647    5540 main.go:141] libmachine: Decoding PEM data...
	I0930 04:15:06.024670    5540 main.go:141] libmachine: Parsing certificate...
	I0930 04:15:06.024729    5540 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19734-1406/.minikube/certs/cert.pem
	I0930 04:15:06.024770    5540 main.go:141] libmachine: Decoding PEM data...
	I0930 04:15:06.024783    5540 main.go:141] libmachine: Parsing certificate...
	I0930 04:15:06.025480    5540 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19734-1406/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0930 04:15:06.194494    5540 main.go:141] libmachine: Creating SSH key...
	I0930 04:15:06.291410    5540 main.go:141] libmachine: Creating Disk image...
	I0930 04:15:06.291422    5540 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0930 04:15:06.291657    5540 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/custom-flannel-962000/disk.qcow2.raw /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/custom-flannel-962000/disk.qcow2
	I0930 04:15:06.301014    5540 main.go:141] libmachine: STDOUT: 
	I0930 04:15:06.301033    5540 main.go:141] libmachine: STDERR: 
	I0930 04:15:06.301105    5540 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/custom-flannel-962000/disk.qcow2 +20000M
	I0930 04:15:06.309215    5540 main.go:141] libmachine: STDOUT: Image resized.
	
	I0930 04:15:06.309232    5540 main.go:141] libmachine: STDERR: 
	I0930 04:15:06.309246    5540 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/custom-flannel-962000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/custom-flannel-962000/disk.qcow2
	I0930 04:15:06.309252    5540 main.go:141] libmachine: Starting QEMU VM...
	I0930 04:15:06.309263    5540 qemu.go:418] Using hvf for hardware acceleration
	I0930 04:15:06.309288    5540 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/custom-flannel-962000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19734-1406/.minikube/machines/custom-flannel-962000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/custom-flannel-962000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c6:1c:b7:6f:4a:0d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/custom-flannel-962000/disk.qcow2
	I0930 04:15:06.310980    5540 main.go:141] libmachine: STDOUT: 
	I0930 04:15:06.310996    5540 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0930 04:15:06.311018    5540 client.go:171] duration metric: took 286.561958ms to LocalClient.Create
	I0930 04:15:08.313229    5540 start.go:128] duration metric: took 2.348355459s to createHost
	I0930 04:15:08.313297    5540 start.go:83] releasing machines lock for "custom-flannel-962000", held for 2.348733834s
	W0930 04:15:08.313619    5540 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-962000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-962000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0930 04:15:08.330280    5540 out.go:201] 
	W0930 04:15:08.335259    5540 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0930 04:15:08.335290    5540 out.go:270] * 
	* 
	W0930 04:15:08.336743    5540 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0930 04:15:08.354929    5540 out.go:201] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/custom-flannel/Start (9.88s)
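Note: the `net_test.go:112: (dbg) Run:` line above is a plain subprocess invocation. A minimal reproduction outside the test harness (a sketch; the binary path assumes a minikube checkout, flags copied from the log) is:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// The same command the custom-flannel test ran above.
		cmd := exec.Command("out/minikube-darwin-arm64", "start",
			"-p", "custom-flannel-962000", "--memory=3072",
			"--alsologtostderr", "--wait=true", "--wait-timeout=15m",
			"--cni=testdata/kube-flannel.yaml", "--driver=qemu2")
		out, err := cmd.CombinedOutput()
		fmt.Print(string(out))
		if err != nil {
			fmt.Println("start failed:", err) // "exit status 80" while socket_vmnet is down
		}
	}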

                                                
                                    
TestNetworkPlugins/group/false/Start (9.81s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p false-962000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 
E0930 04:15:18.232484    1929 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19734-1406/.minikube/profiles/addons-584000/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p false-962000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 : exit status 80 (9.812453667s)

                                                
                                                
-- stdout --
	* [false-962000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19734
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19734-1406/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19734-1406/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "false-962000" primary control-plane node in "false-962000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "false-962000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0930 04:15:10.793257    5660 out.go:345] Setting OutFile to fd 1 ...
	I0930 04:15:10.793401    5660 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 04:15:10.793408    5660 out.go:358] Setting ErrFile to fd 2...
	I0930 04:15:10.793410    5660 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 04:15:10.793554    5660 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19734-1406/.minikube/bin
	I0930 04:15:10.794744    5660 out.go:352] Setting JSON to false
	I0930 04:15:10.811854    5660 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4473,"bootTime":1727690437,"procs":507,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0930 04:15:10.811934    5660 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0930 04:15:10.820665    5660 out.go:177] * [false-962000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0930 04:15:10.829606    5660 out.go:177]   - MINIKUBE_LOCATION=19734
	I0930 04:15:10.829660    5660 notify.go:220] Checking for updates...
	I0930 04:15:10.837644    5660 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19734-1406/kubeconfig
	I0930 04:15:10.840593    5660 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0930 04:15:10.843673    5660 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0930 04:15:10.846698    5660 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19734-1406/.minikube
	I0930 04:15:10.848197    5660 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0930 04:15:10.851955    5660 config.go:182] Loaded profile config "multinode-711000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0930 04:15:10.852024    5660 config.go:182] Loaded profile config "stopped-upgrade-312000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0930 04:15:10.852077    5660 driver.go:394] Setting default libvirt URI to qemu:///system
	I0930 04:15:10.856676    5660 out.go:177] * Using the qemu2 driver based on user configuration
	I0930 04:15:10.862621    5660 start.go:297] selected driver: qemu2
	I0930 04:15:10.862627    5660 start.go:901] validating driver "qemu2" against <nil>
	I0930 04:15:10.862637    5660 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0930 04:15:10.864779    5660 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0930 04:15:10.867638    5660 out.go:177] * Automatically selected the socket_vmnet network
	I0930 04:15:10.870729    5660 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0930 04:15:10.870743    5660 cni.go:84] Creating CNI manager for "false"
	I0930 04:15:10.870772    5660 start.go:340] cluster config:
	{Name:false-962000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:false-962000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 04:15:10.874210    5660 iso.go:125] acquiring lock: {Name:mk0aae8f7f133a44b195b0eefe4c7ae573d14aac Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0930 04:15:10.881691    5660 out.go:177] * Starting "false-962000" primary control-plane node in "false-962000" cluster
	I0930 04:15:10.885595    5660 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0930 04:15:10.885624    5660 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0930 04:15:10.885637    5660 cache.go:56] Caching tarball of preloaded images
	I0930 04:15:10.885723    5660 preload.go:172] Found /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0930 04:15:10.885728    5660 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0930 04:15:10.885798    5660 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19734-1406/.minikube/profiles/false-962000/config.json ...
	I0930 04:15:10.885807    5660 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19734-1406/.minikube/profiles/false-962000/config.json: {Name:mk457b7cf6310f7916c8a3df7fba738172c42b09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 04:15:10.886016    5660 start.go:360] acquireMachinesLock for false-962000: {Name:mk6ecfc7d8126da9fa2597e515ffeee9de676ee7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0930 04:15:10.886046    5660 start.go:364] duration metric: took 25.166µs to acquireMachinesLock for "false-962000"
	I0930 04:15:10.886060    5660 start.go:93] Provisioning new machine with config: &{Name:false-962000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:false-962000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0930 04:15:10.886097    5660 start.go:125] createHost starting for "" (driver="qemu2")
	I0930 04:15:10.894742    5660 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0930 04:15:10.910178    5660 start.go:159] libmachine.API.Create for "false-962000" (driver="qemu2")
	I0930 04:15:10.910213    5660 client.go:168] LocalClient.Create starting
	I0930 04:15:10.910287    5660 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19734-1406/.minikube/certs/ca.pem
	I0930 04:15:10.910319    5660 main.go:141] libmachine: Decoding PEM data...
	I0930 04:15:10.910328    5660 main.go:141] libmachine: Parsing certificate...
	I0930 04:15:10.910370    5660 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19734-1406/.minikube/certs/cert.pem
	I0930 04:15:10.910393    5660 main.go:141] libmachine: Decoding PEM data...
	I0930 04:15:10.910404    5660 main.go:141] libmachine: Parsing certificate...
	I0930 04:15:10.910751    5660 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19734-1406/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0930 04:15:11.070625    5660 main.go:141] libmachine: Creating SSH key...
	I0930 04:15:11.160072    5660 main.go:141] libmachine: Creating Disk image...
	I0930 04:15:11.160078    5660 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0930 04:15:11.160277    5660 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/false-962000/disk.qcow2.raw /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/false-962000/disk.qcow2
	I0930 04:15:11.169558    5660 main.go:141] libmachine: STDOUT: 
	I0930 04:15:11.169572    5660 main.go:141] libmachine: STDERR: 
	I0930 04:15:11.169632    5660 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/false-962000/disk.qcow2 +20000M
	I0930 04:15:11.177632    5660 main.go:141] libmachine: STDOUT: Image resized.
	
	I0930 04:15:11.177651    5660 main.go:141] libmachine: STDERR: 
	I0930 04:15:11.177678    5660 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/false-962000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/false-962000/disk.qcow2
	I0930 04:15:11.177684    5660 main.go:141] libmachine: Starting QEMU VM...
	I0930 04:15:11.177696    5660 qemu.go:418] Using hvf for hardware acceleration
	I0930 04:15:11.177724    5660 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/false-962000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19734-1406/.minikube/machines/false-962000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/false-962000/qemu.pid -device virtio-net-pci,netdev=net0,mac=66:95:a4:9e:46:b0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/false-962000/disk.qcow2
	I0930 04:15:11.179461    5660 main.go:141] libmachine: STDOUT: 
	I0930 04:15:11.179482    5660 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0930 04:15:11.179504    5660 client.go:171] duration metric: took 269.288875ms to LocalClient.Create
	I0930 04:15:13.181029    5660 start.go:128] duration metric: took 2.294915667s to createHost
	I0930 04:15:13.181124    5660 start.go:83] releasing machines lock for "false-962000", held for 2.295107791s
	W0930 04:15:13.181187    5660 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0930 04:15:13.198444    5660 out.go:177] * Deleting "false-962000" in qemu2 ...
	W0930 04:15:13.237922    5660 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0930 04:15:13.237953    5660 start.go:729] Will try again in 5 seconds ...
	I0930 04:15:18.240371    5660 start.go:360] acquireMachinesLock for false-962000: {Name:mk6ecfc7d8126da9fa2597e515ffeee9de676ee7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0930 04:15:18.240775    5660 start.go:364] duration metric: took 325.25µs to acquireMachinesLock for "false-962000"
	I0930 04:15:18.240849    5660 start.go:93] Provisioning new machine with config: &{Name:false-962000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:false-962000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0930 04:15:18.241054    5660 start.go:125] createHost starting for "" (driver="qemu2")
	I0930 04:15:18.248185    5660 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0930 04:15:18.293696    5660 start.go:159] libmachine.API.Create for "false-962000" (driver="qemu2")
	I0930 04:15:18.293753    5660 client.go:168] LocalClient.Create starting
	I0930 04:15:18.293874    5660 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19734-1406/.minikube/certs/ca.pem
	I0930 04:15:18.293942    5660 main.go:141] libmachine: Decoding PEM data...
	I0930 04:15:18.293954    5660 main.go:141] libmachine: Parsing certificate...
	I0930 04:15:18.294009    5660 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19734-1406/.minikube/certs/cert.pem
	I0930 04:15:18.294048    5660 main.go:141] libmachine: Decoding PEM data...
	I0930 04:15:18.294057    5660 main.go:141] libmachine: Parsing certificate...
	I0930 04:15:18.294564    5660 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19734-1406/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0930 04:15:18.457003    5660 main.go:141] libmachine: Creating SSH key...
	I0930 04:15:18.496862    5660 main.go:141] libmachine: Creating Disk image...
	I0930 04:15:18.496871    5660 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0930 04:15:18.497081    5660 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/false-962000/disk.qcow2.raw /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/false-962000/disk.qcow2
	I0930 04:15:18.506403    5660 main.go:141] libmachine: STDOUT: 
	I0930 04:15:18.506512    5660 main.go:141] libmachine: STDERR: 
	I0930 04:15:18.506574    5660 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/false-962000/disk.qcow2 +20000M
	I0930 04:15:18.514552    5660 main.go:141] libmachine: STDOUT: Image resized.
	
	I0930 04:15:18.514573    5660 main.go:141] libmachine: STDERR: 
	I0930 04:15:18.514584    5660 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/false-962000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/false-962000/disk.qcow2
	I0930 04:15:18.514590    5660 main.go:141] libmachine: Starting QEMU VM...
	I0930 04:15:18.514598    5660 qemu.go:418] Using hvf for hardware acceleration
	I0930 04:15:18.514636    5660 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/false-962000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19734-1406/.minikube/machines/false-962000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/false-962000/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:98:b0:0f:a5:93 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/false-962000/disk.qcow2
	I0930 04:15:18.516360    5660 main.go:141] libmachine: STDOUT: 
	I0930 04:15:18.516414    5660 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0930 04:15:18.516429    5660 client.go:171] duration metric: took 222.675333ms to LocalClient.Create
	I0930 04:15:20.518614    5660 start.go:128] duration metric: took 2.2775485s to createHost
	I0930 04:15:20.518693    5660 start.go:83] releasing machines lock for "false-962000", held for 2.277937083s
	W0930 04:15:20.519097    5660 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p false-962000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p false-962000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0930 04:15:20.537957    5660 out.go:201] 
	W0930 04:15:20.544507    5660 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0930 04:15:20.544542    5660 out.go:270] * 
	* 
	W0930 04:15:20.547030    5660 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0930 04:15:20.565830    5660 out.go:201] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/false/Start (9.81s)
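Note: the stderr above also shows the driver's retry path: the first StartHost fails, the half-created profile is deleted, and after five seconds one more create is attempted before GUEST_PROVISION is raised. A compressed sketch of that control flow (illustrative names only, not minikube's actual API):

	package main

	import (
		"fmt"
		"time"
	)

	// createHost stands in for libmachine.API.Create; while the daemon is down
	// it always fails the way the logs above do.
	func createHost() error {
		return fmt.Errorf(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}

	func main() {
		if err := createHost(); err != nil {
			fmt.Println("! StartHost failed, but will try again:", err)
			time.Sleep(5 * time.Second) // matches "Will try again in 5 seconds ..."
			if err := createHost(); err != nil {
				fmt.Println("X Exiting due to GUEST_PROVISION:", err)
			}
		}
	}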

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (9.85s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kindnet-962000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 
E0930 04:15:32.383243    1929 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19734-1406/.minikube/profiles/functional-853000/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kindnet-962000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 : exit status 80 (9.844888334s)

                                                
                                                
-- stdout --
	* [kindnet-962000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19734
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19734-1406/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19734-1406/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kindnet-962000" primary control-plane node in "kindnet-962000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kindnet-962000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0930 04:15:22.802204    5774 out.go:345] Setting OutFile to fd 1 ...
	I0930 04:15:22.802348    5774 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 04:15:22.802351    5774 out.go:358] Setting ErrFile to fd 2...
	I0930 04:15:22.802353    5774 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 04:15:22.802495    5774 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19734-1406/.minikube/bin
	I0930 04:15:22.803619    5774 out.go:352] Setting JSON to false
	I0930 04:15:22.819951    5774 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4485,"bootTime":1727690437,"procs":511,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0930 04:15:22.820021    5774 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0930 04:15:22.826723    5774 out.go:177] * [kindnet-962000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0930 04:15:22.835548    5774 out.go:177]   - MINIKUBE_LOCATION=19734
	I0930 04:15:22.835580    5774 notify.go:220] Checking for updates...
	I0930 04:15:22.842515    5774 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19734-1406/kubeconfig
	I0930 04:15:22.845536    5774 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0930 04:15:22.849428    5774 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0930 04:15:22.852494    5774 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19734-1406/.minikube
	I0930 04:15:22.855504    5774 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0930 04:15:22.858710    5774 config.go:182] Loaded profile config "multinode-711000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0930 04:15:22.858770    5774 config.go:182] Loaded profile config "stopped-upgrade-312000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0930 04:15:22.858819    5774 driver.go:394] Setting default libvirt URI to qemu:///system
	I0930 04:15:22.863499    5774 out.go:177] * Using the qemu2 driver based on user configuration
	I0930 04:15:22.869509    5774 start.go:297] selected driver: qemu2
	I0930 04:15:22.869514    5774 start.go:901] validating driver "qemu2" against <nil>
	I0930 04:15:22.869519    5774 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0930 04:15:22.871660    5774 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0930 04:15:22.875525    5774 out.go:177] * Automatically selected the socket_vmnet network
	I0930 04:15:22.878613    5774 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0930 04:15:22.878643    5774 cni.go:84] Creating CNI manager for "kindnet"
	I0930 04:15:22.878650    5774 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0930 04:15:22.878681    5774 start.go:340] cluster config:
	{Name:kindnet-962000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kindnet-962000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 04:15:22.882196    5774 iso.go:125] acquiring lock: {Name:mk0aae8f7f133a44b195b0eefe4c7ae573d14aac Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0930 04:15:22.890523    5774 out.go:177] * Starting "kindnet-962000" primary control-plane node in "kindnet-962000" cluster
	I0930 04:15:22.894300    5774 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0930 04:15:22.894318    5774 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0930 04:15:22.894329    5774 cache.go:56] Caching tarball of preloaded images
	I0930 04:15:22.894406    5774 preload.go:172] Found /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0930 04:15:22.894412    5774 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0930 04:15:22.894490    5774 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19734-1406/.minikube/profiles/kindnet-962000/config.json ...
	I0930 04:15:22.894501    5774 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19734-1406/.minikube/profiles/kindnet-962000/config.json: {Name:mkf7f45ae77307d3c26515506d45a1e2da12974a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 04:15:22.894720    5774 start.go:360] acquireMachinesLock for kindnet-962000: {Name:mk6ecfc7d8126da9fa2597e515ffeee9de676ee7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0930 04:15:22.894752    5774 start.go:364] duration metric: took 26.666µs to acquireMachinesLock for "kindnet-962000"
	I0930 04:15:22.894764    5774 start.go:93] Provisioning new machine with config: &{Name:kindnet-962000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kindnet-962000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0930 04:15:22.894797    5774 start.go:125] createHost starting for "" (driver="qemu2")
	I0930 04:15:22.902512    5774 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0930 04:15:22.919302    5774 start.go:159] libmachine.API.Create for "kindnet-962000" (driver="qemu2")
	I0930 04:15:22.919345    5774 client.go:168] LocalClient.Create starting
	I0930 04:15:22.919407    5774 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19734-1406/.minikube/certs/ca.pem
	I0930 04:15:22.919439    5774 main.go:141] libmachine: Decoding PEM data...
	I0930 04:15:22.919449    5774 main.go:141] libmachine: Parsing certificate...
	I0930 04:15:22.919495    5774 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19734-1406/.minikube/certs/cert.pem
	I0930 04:15:22.919517    5774 main.go:141] libmachine: Decoding PEM data...
	I0930 04:15:22.919526    5774 main.go:141] libmachine: Parsing certificate...
	I0930 04:15:22.919945    5774 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19734-1406/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0930 04:15:23.081773    5774 main.go:141] libmachine: Creating SSH key...
	I0930 04:15:23.214240    5774 main.go:141] libmachine: Creating Disk image...
	I0930 04:15:23.214248    5774 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0930 04:15:23.214458    5774 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/kindnet-962000/disk.qcow2.raw /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/kindnet-962000/disk.qcow2
	I0930 04:15:23.223640    5774 main.go:141] libmachine: STDOUT: 
	I0930 04:15:23.223655    5774 main.go:141] libmachine: STDERR: 
	I0930 04:15:23.223715    5774 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/kindnet-962000/disk.qcow2 +20000M
	I0930 04:15:23.231741    5774 main.go:141] libmachine: STDOUT: Image resized.
	
	I0930 04:15:23.231757    5774 main.go:141] libmachine: STDERR: 
	I0930 04:15:23.231775    5774 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/kindnet-962000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/kindnet-962000/disk.qcow2
	I0930 04:15:23.231781    5774 main.go:141] libmachine: Starting QEMU VM...
	I0930 04:15:23.231792    5774 qemu.go:418] Using hvf for hardware acceleration
	I0930 04:15:23.231817    5774 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/kindnet-962000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19734-1406/.minikube/machines/kindnet-962000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/kindnet-962000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b2:47:be:93:c9:15 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/kindnet-962000/disk.qcow2
	I0930 04:15:23.233475    5774 main.go:141] libmachine: STDOUT: 
	I0930 04:15:23.233495    5774 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0930 04:15:23.233514    5774 client.go:171] duration metric: took 314.169459ms to LocalClient.Create
	I0930 04:15:25.235774    5774 start.go:128] duration metric: took 2.340983833s to createHost
	I0930 04:15:25.235913    5774 start.go:83] releasing machines lock for "kindnet-962000", held for 2.341186375s
	W0930 04:15:25.236037    5774 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0930 04:15:25.255275    5774 out.go:177] * Deleting "kindnet-962000" in qemu2 ...
	W0930 04:15:25.289570    5774 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0930 04:15:25.289597    5774 start.go:729] Will try again in 5 seconds ...
	I0930 04:15:30.291635    5774 start.go:360] acquireMachinesLock for kindnet-962000: {Name:mk6ecfc7d8126da9fa2597e515ffeee9de676ee7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0930 04:15:30.291968    5774 start.go:364] duration metric: took 224.125µs to acquireMachinesLock for "kindnet-962000"
	I0930 04:15:30.292032    5774 start.go:93] Provisioning new machine with config: &{Name:kindnet-962000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kindnet-962000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0930 04:15:30.292146    5774 start.go:125] createHost starting for "" (driver="qemu2")
	I0930 04:15:30.302489    5774 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0930 04:15:30.332185    5774 start.go:159] libmachine.API.Create for "kindnet-962000" (driver="qemu2")
	I0930 04:15:30.332236    5774 client.go:168] LocalClient.Create starting
	I0930 04:15:30.332363    5774 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19734-1406/.minikube/certs/ca.pem
	I0930 04:15:30.332428    5774 main.go:141] libmachine: Decoding PEM data...
	I0930 04:15:30.332442    5774 main.go:141] libmachine: Parsing certificate...
	I0930 04:15:30.332494    5774 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19734-1406/.minikube/certs/cert.pem
	I0930 04:15:30.332532    5774 main.go:141] libmachine: Decoding PEM data...
	I0930 04:15:30.332541    5774 main.go:141] libmachine: Parsing certificate...
	I0930 04:15:30.333158    5774 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19734-1406/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0930 04:15:30.495262    5774 main.go:141] libmachine: Creating SSH key...
	I0930 04:15:30.550489    5774 main.go:141] libmachine: Creating Disk image...
	I0930 04:15:30.550494    5774 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0930 04:15:30.550715    5774 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/kindnet-962000/disk.qcow2.raw /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/kindnet-962000/disk.qcow2
	I0930 04:15:30.559946    5774 main.go:141] libmachine: STDOUT: 
	I0930 04:15:30.559967    5774 main.go:141] libmachine: STDERR: 
	I0930 04:15:30.560022    5774 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/kindnet-962000/disk.qcow2 +20000M
	I0930 04:15:30.568279    5774 main.go:141] libmachine: STDOUT: Image resized.
	
	I0930 04:15:30.568300    5774 main.go:141] libmachine: STDERR: 
	I0930 04:15:30.568317    5774 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/kindnet-962000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/kindnet-962000/disk.qcow2
	I0930 04:15:30.568322    5774 main.go:141] libmachine: Starting QEMU VM...
	I0930 04:15:30.568330    5774 qemu.go:418] Using hvf for hardware acceleration
	I0930 04:15:30.568361    5774 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/kindnet-962000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19734-1406/.minikube/machines/kindnet-962000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/kindnet-962000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c2:b3:ac:1f:b8:2a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/kindnet-962000/disk.qcow2
	I0930 04:15:30.570087    5774 main.go:141] libmachine: STDOUT: 
	I0930 04:15:30.570104    5774 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0930 04:15:30.570117    5774 client.go:171] duration metric: took 237.864292ms to LocalClient.Create
	I0930 04:15:32.572179    5774 start.go:128] duration metric: took 2.280055459s to createHost
	I0930 04:15:32.572225    5774 start.go:83] releasing machines lock for "kindnet-962000", held for 2.280283208s
	W0930 04:15:32.572386    5774 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kindnet-962000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kindnet-962000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0930 04:15:32.579796    5774 out.go:201] 
	W0930 04:15:32.592792    5774 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0930 04:15:32.592808    5774 out.go:270] * 
	* 
	W0930 04:15:32.594389    5774 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0930 04:15:32.606809    5774 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kindnet/Start (9.85s)
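Every start in this group dies at the same step: socket_vmnet_client cannot connect to the daemon's Unix socket at /var/run/socket_vmnet, so QEMU is never launched. A quick host-side triage (a diagnostic sketch, not part of the test run; the launchd label and the use of launchd at all are assumptions based on a stock socket_vmnet install, not taken from this log):

	# Does the socket exist, and is anything accepting connections on it?
	ls -l /var/run/socket_vmnet
	# A refused connection fails here immediately, matching the error above
	nc -U /var/run/socket_vmnet </dev/null && echo "socket accepts connections"
	# If socket_vmnet is managed by launchd, inspect and restart the daemon
	sudo launchctl list | grep -i socket_vmnet
	sudo launchctl kickstart -k system/io.github.lima-vm.socket_vmnet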

TestNetworkPlugins/group/flannel/Start (9.87s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p flannel-962000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p flannel-962000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 : exit status 80 (9.872828917s)

-- stdout --
	* [flannel-962000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19734
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19734-1406/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19734-1406/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "flannel-962000" primary control-plane node in "flannel-962000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "flannel-962000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0930 04:15:34.926192    5887 out.go:345] Setting OutFile to fd 1 ...
	I0930 04:15:34.926302    5887 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 04:15:34.926306    5887 out.go:358] Setting ErrFile to fd 2...
	I0930 04:15:34.926308    5887 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 04:15:34.926432    5887 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19734-1406/.minikube/bin
	I0930 04:15:34.927510    5887 out.go:352] Setting JSON to false
	I0930 04:15:34.944498    5887 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4497,"bootTime":1727690437,"procs":508,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0930 04:15:34.944565    5887 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0930 04:15:34.950949    5887 out.go:177] * [flannel-962000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0930 04:15:34.958691    5887 out.go:177]   - MINIKUBE_LOCATION=19734
	I0930 04:15:34.958717    5887 notify.go:220] Checking for updates...
	I0930 04:15:34.966796    5887 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19734-1406/kubeconfig
	I0930 04:15:34.969865    5887 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0930 04:15:34.973855    5887 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0930 04:15:34.976919    5887 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19734-1406/.minikube
	I0930 04:15:34.980894    5887 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0930 04:15:34.984277    5887 config.go:182] Loaded profile config "multinode-711000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0930 04:15:34.984350    5887 config.go:182] Loaded profile config "stopped-upgrade-312000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0930 04:15:34.984399    5887 driver.go:394] Setting default libvirt URI to qemu:///system
	I0930 04:15:34.988928    5887 out.go:177] * Using the qemu2 driver based on user configuration
	I0930 04:15:34.995851    5887 start.go:297] selected driver: qemu2
	I0930 04:15:34.995858    5887 start.go:901] validating driver "qemu2" against <nil>
	I0930 04:15:34.995864    5887 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0930 04:15:34.998085    5887 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0930 04:15:35.001878    5887 out.go:177] * Automatically selected the socket_vmnet network
	I0930 04:15:35.004891    5887 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0930 04:15:35.004910    5887 cni.go:84] Creating CNI manager for "flannel"
	I0930 04:15:35.004914    5887 start_flags.go:319] Found "Flannel" CNI - setting NetworkPlugin=cni
	I0930 04:15:35.004948    5887 start.go:340] cluster config:
	{Name:flannel-962000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:flannel-962000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 04:15:35.008753    5887 iso.go:125] acquiring lock: {Name:mk0aae8f7f133a44b195b0eefe4c7ae573d14aac Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0930 04:15:35.015890    5887 out.go:177] * Starting "flannel-962000" primary control-plane node in "flannel-962000" cluster
	I0930 04:15:35.019872    5887 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0930 04:15:35.019888    5887 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0930 04:15:35.019898    5887 cache.go:56] Caching tarball of preloaded images
	I0930 04:15:35.019952    5887 preload.go:172] Found /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0930 04:15:35.019957    5887 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0930 04:15:35.020022    5887 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19734-1406/.minikube/profiles/flannel-962000/config.json ...
	I0930 04:15:35.020033    5887 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19734-1406/.minikube/profiles/flannel-962000/config.json: {Name:mk7ff6e59a86aa68ce9971e2c34f4490935ea6cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 04:15:35.020328    5887 start.go:360] acquireMachinesLock for flannel-962000: {Name:mk6ecfc7d8126da9fa2597e515ffeee9de676ee7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0930 04:15:35.020358    5887 start.go:364] duration metric: took 25.375µs to acquireMachinesLock for "flannel-962000"
	I0930 04:15:35.020368    5887 start.go:93] Provisioning new machine with config: &{Name:flannel-962000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:flannel-962000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0930 04:15:35.020405    5887 start.go:125] createHost starting for "" (driver="qemu2")
	I0930 04:15:35.024803    5887 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0930 04:15:35.040282    5887 start.go:159] libmachine.API.Create for "flannel-962000" (driver="qemu2")
	I0930 04:15:35.040314    5887 client.go:168] LocalClient.Create starting
	I0930 04:15:35.040383    5887 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19734-1406/.minikube/certs/ca.pem
	I0930 04:15:35.040414    5887 main.go:141] libmachine: Decoding PEM data...
	I0930 04:15:35.040423    5887 main.go:141] libmachine: Parsing certificate...
	I0930 04:15:35.040467    5887 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19734-1406/.minikube/certs/cert.pem
	I0930 04:15:35.040490    5887 main.go:141] libmachine: Decoding PEM data...
	I0930 04:15:35.040498    5887 main.go:141] libmachine: Parsing certificate...
	I0930 04:15:35.040938    5887 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19734-1406/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0930 04:15:35.202884    5887 main.go:141] libmachine: Creating SSH key...
	I0930 04:15:35.288678    5887 main.go:141] libmachine: Creating Disk image...
	I0930 04:15:35.288689    5887 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0930 04:15:35.288910    5887 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/flannel-962000/disk.qcow2.raw /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/flannel-962000/disk.qcow2
	I0930 04:15:35.298047    5887 main.go:141] libmachine: STDOUT: 
	I0930 04:15:35.298067    5887 main.go:141] libmachine: STDERR: 
	I0930 04:15:35.298123    5887 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/flannel-962000/disk.qcow2 +20000M
	I0930 04:15:35.306011    5887 main.go:141] libmachine: STDOUT: Image resized.
	
	I0930 04:15:35.306026    5887 main.go:141] libmachine: STDERR: 
	I0930 04:15:35.306051    5887 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/flannel-962000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/flannel-962000/disk.qcow2
	I0930 04:15:35.306057    5887 main.go:141] libmachine: Starting QEMU VM...
	I0930 04:15:35.306069    5887 qemu.go:418] Using hvf for hardware acceleration
	I0930 04:15:35.306095    5887 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/flannel-962000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19734-1406/.minikube/machines/flannel-962000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/flannel-962000/qemu.pid -device virtio-net-pci,netdev=net0,mac=52:1c:86:05:5f:8f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/flannel-962000/disk.qcow2
	I0930 04:15:35.307756    5887 main.go:141] libmachine: STDOUT: 
	I0930 04:15:35.307771    5887 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0930 04:15:35.307793    5887 client.go:171] duration metric: took 267.477458ms to LocalClient.Create
	I0930 04:15:37.310161    5887 start.go:128] duration metric: took 2.28975625s to createHost
	I0930 04:15:37.310257    5887 start.go:83] releasing machines lock for "flannel-962000", held for 2.289928s
	W0930 04:15:37.310336    5887 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0930 04:15:37.321384    5887 out.go:177] * Deleting "flannel-962000" in qemu2 ...
	W0930 04:15:37.365699    5887 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0930 04:15:37.365725    5887 start.go:729] Will try again in 5 seconds ...
	I0930 04:15:42.367748    5887 start.go:360] acquireMachinesLock for flannel-962000: {Name:mk6ecfc7d8126da9fa2597e515ffeee9de676ee7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0930 04:15:42.367981    5887 start.go:364] duration metric: took 186.333µs to acquireMachinesLock for "flannel-962000"
	I0930 04:15:42.368036    5887 start.go:93] Provisioning new machine with config: &{Name:flannel-962000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:flannel-962000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0930 04:15:42.368132    5887 start.go:125] createHost starting for "" (driver="qemu2")
	I0930 04:15:42.377342    5887 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0930 04:15:42.406878    5887 start.go:159] libmachine.API.Create for "flannel-962000" (driver="qemu2")
	I0930 04:15:42.406929    5887 client.go:168] LocalClient.Create starting
	I0930 04:15:42.407024    5887 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19734-1406/.minikube/certs/ca.pem
	I0930 04:15:42.407075    5887 main.go:141] libmachine: Decoding PEM data...
	I0930 04:15:42.407089    5887 main.go:141] libmachine: Parsing certificate...
	I0930 04:15:42.407145    5887 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19734-1406/.minikube/certs/cert.pem
	I0930 04:15:42.407178    5887 main.go:141] libmachine: Decoding PEM data...
	I0930 04:15:42.407188    5887 main.go:141] libmachine: Parsing certificate...
	I0930 04:15:42.407613    5887 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19734-1406/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0930 04:15:42.572613    5887 main.go:141] libmachine: Creating SSH key...
	I0930 04:15:42.694453    5887 main.go:141] libmachine: Creating Disk image...
	I0930 04:15:42.694460    5887 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0930 04:15:42.694651    5887 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/flannel-962000/disk.qcow2.raw /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/flannel-962000/disk.qcow2
	I0930 04:15:42.703906    5887 main.go:141] libmachine: STDOUT: 
	I0930 04:15:42.703927    5887 main.go:141] libmachine: STDERR: 
	I0930 04:15:42.703983    5887 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/flannel-962000/disk.qcow2 +20000M
	I0930 04:15:42.712152    5887 main.go:141] libmachine: STDOUT: Image resized.
	
	I0930 04:15:42.712169    5887 main.go:141] libmachine: STDERR: 
	I0930 04:15:42.712182    5887 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/flannel-962000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/flannel-962000/disk.qcow2
	I0930 04:15:42.712186    5887 main.go:141] libmachine: Starting QEMU VM...
	I0930 04:15:42.712194    5887 qemu.go:418] Using hvf for hardware acceleration
	I0930 04:15:42.712218    5887 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/flannel-962000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19734-1406/.minikube/machines/flannel-962000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/flannel-962000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e6:0b:8b:d1:ae:80 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/flannel-962000/disk.qcow2
	I0930 04:15:42.713994    5887 main.go:141] libmachine: STDOUT: 
	I0930 04:15:42.714008    5887 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0930 04:15:42.714020    5887 client.go:171] duration metric: took 307.092166ms to LocalClient.Create
	I0930 04:15:44.716177    5887 start.go:128] duration metric: took 2.348054333s to createHost
	I0930 04:15:44.716245    5887 start.go:83] releasing machines lock for "flannel-962000", held for 2.348292292s
	W0930 04:15:44.716582    5887 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p flannel-962000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p flannel-962000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0930 04:15:44.732206    5887 out.go:201] 
	W0930 04:15:44.736300    5887 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0930 04:15:44.736328    5887 out.go:270] * 
	* 
	W0930 04:15:44.739148    5887 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0930 04:15:44.757221    5887 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/flannel/Start (9.87s)
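For context on the launcher line that fails in each attempt: socket_vmnet_client connects to the daemon's socket first and only then execs the wrapped qemu-system-aarch64 with the connected socket inherited as file descriptor 3, which is what -netdev socket,id=net0,fd=3 refers to. A refused connection therefore aborts before QEMU runs at all, consistent with the empty STDOUT in every run. A minimal reproduction of just that step (a sketch, assuming the client's documented "socket path, then command" calling convention, and substituting true for the real QEMU command):

	# Exits non-zero with "Connection refused" while the daemon is down;
	# once the daemon is healthy, true is exec'ed with the socket as fd 3.
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true \
	  && echo "daemon reachable" || echo "connect failed, as in this report"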

TestNetworkPlugins/group/enable-default-cni/Start (9.76s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p enable-default-cni-962000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p enable-default-cni-962000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 : exit status 80 (9.760958334s)

-- stdout --
	* [enable-default-cni-962000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19734
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19734-1406/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19734-1406/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "enable-default-cni-962000" primary control-plane node in "enable-default-cni-962000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "enable-default-cni-962000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0930 04:15:47.157293    6007 out.go:345] Setting OutFile to fd 1 ...
	I0930 04:15:47.157414    6007 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 04:15:47.157417    6007 out.go:358] Setting ErrFile to fd 2...
	I0930 04:15:47.157419    6007 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 04:15:47.157572    6007 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19734-1406/.minikube/bin
	I0930 04:15:47.158669    6007 out.go:352] Setting JSON to false
	I0930 04:15:47.175023    6007 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4510,"bootTime":1727690437,"procs":508,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0930 04:15:47.175091    6007 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0930 04:15:47.182576    6007 out.go:177] * [enable-default-cni-962000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0930 04:15:47.190605    6007 out.go:177]   - MINIKUBE_LOCATION=19734
	I0930 04:15:47.190661    6007 notify.go:220] Checking for updates...
	I0930 04:15:47.198547    6007 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19734-1406/kubeconfig
	I0930 04:15:47.201517    6007 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0930 04:15:47.204522    6007 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0930 04:15:47.207523    6007 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19734-1406/.minikube
	I0930 04:15:47.210515    6007 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0930 04:15:47.212325    6007 config.go:182] Loaded profile config "multinode-711000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0930 04:15:47.212393    6007 config.go:182] Loaded profile config "stopped-upgrade-312000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0930 04:15:47.212441    6007 driver.go:394] Setting default libvirt URI to qemu:///system
	I0930 04:15:47.216503    6007 out.go:177] * Using the qemu2 driver based on user configuration
	I0930 04:15:47.223299    6007 start.go:297] selected driver: qemu2
	I0930 04:15:47.223304    6007 start.go:901] validating driver "qemu2" against <nil>
	I0930 04:15:47.223309    6007 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0930 04:15:47.225421    6007 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0930 04:15:47.229529    6007 out.go:177] * Automatically selected the socket_vmnet network
	E0930 04:15:47.232675    6007 start_flags.go:464] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I0930 04:15:47.232688    6007 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0930 04:15:47.232704    6007 cni.go:84] Creating CNI manager for "bridge"
	I0930 04:15:47.232708    6007 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0930 04:15:47.232735    6007 start.go:340] cluster config:
	{Name:enable-default-cni-962000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:enable-default-cni-962000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 04:15:47.236226    6007 iso.go:125] acquiring lock: {Name:mk0aae8f7f133a44b195b0eefe4c7ae573d14aac Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0930 04:15:47.244480    6007 out.go:177] * Starting "enable-default-cni-962000" primary control-plane node in "enable-default-cni-962000" cluster
	I0930 04:15:47.248508    6007 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0930 04:15:47.248523    6007 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0930 04:15:47.248534    6007 cache.go:56] Caching tarball of preloaded images
	I0930 04:15:47.248605    6007 preload.go:172] Found /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0930 04:15:47.248610    6007 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0930 04:15:47.248678    6007 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19734-1406/.minikube/profiles/enable-default-cni-962000/config.json ...
	I0930 04:15:47.248689    6007 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19734-1406/.minikube/profiles/enable-default-cni-962000/config.json: {Name:mkde5e49664d49b930ef29071e8c115552ad62e5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 04:15:47.249102    6007 start.go:360] acquireMachinesLock for enable-default-cni-962000: {Name:mk6ecfc7d8126da9fa2597e515ffeee9de676ee7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0930 04:15:47.249135    6007 start.go:364] duration metric: took 26.375µs to acquireMachinesLock for "enable-default-cni-962000"
	I0930 04:15:47.249146    6007 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-962000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:enable-default-cni-962000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0930 04:15:47.249172    6007 start.go:125] createHost starting for "" (driver="qemu2")
	I0930 04:15:47.252600    6007 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0930 04:15:47.269014    6007 start.go:159] libmachine.API.Create for "enable-default-cni-962000" (driver="qemu2")
	I0930 04:15:47.269056    6007 client.go:168] LocalClient.Create starting
	I0930 04:15:47.269127    6007 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19734-1406/.minikube/certs/ca.pem
	I0930 04:15:47.269162    6007 main.go:141] libmachine: Decoding PEM data...
	I0930 04:15:47.269171    6007 main.go:141] libmachine: Parsing certificate...
	I0930 04:15:47.269210    6007 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19734-1406/.minikube/certs/cert.pem
	I0930 04:15:47.269233    6007 main.go:141] libmachine: Decoding PEM data...
	I0930 04:15:47.269240    6007 main.go:141] libmachine: Parsing certificate...
	I0930 04:15:47.269634    6007 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19734-1406/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0930 04:15:47.428526    6007 main.go:141] libmachine: Creating SSH key...
	I0930 04:15:47.467386    6007 main.go:141] libmachine: Creating Disk image...
	I0930 04:15:47.467391    6007 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0930 04:15:47.467616    6007 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/enable-default-cni-962000/disk.qcow2.raw /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/enable-default-cni-962000/disk.qcow2
	I0930 04:15:47.476974    6007 main.go:141] libmachine: STDOUT: 
	I0930 04:15:47.476996    6007 main.go:141] libmachine: STDERR: 
	I0930 04:15:47.477052    6007 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/enable-default-cni-962000/disk.qcow2 +20000M
	I0930 04:15:47.485509    6007 main.go:141] libmachine: STDOUT: Image resized.
	
	I0930 04:15:47.485525    6007 main.go:141] libmachine: STDERR: 
	I0930 04:15:47.485539    6007 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/enable-default-cni-962000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/enable-default-cni-962000/disk.qcow2
	I0930 04:15:47.485546    6007 main.go:141] libmachine: Starting QEMU VM...
	I0930 04:15:47.485557    6007 qemu.go:418] Using hvf for hardware acceleration
	I0930 04:15:47.485586    6007 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/enable-default-cni-962000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19734-1406/.minikube/machines/enable-default-cni-962000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/enable-default-cni-962000/qemu.pid -device virtio-net-pci,netdev=net0,mac=26:b0:b7:5d:e7:ba -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/enable-default-cni-962000/disk.qcow2
	I0930 04:15:47.487395    6007 main.go:141] libmachine: STDOUT: 
	I0930 04:15:47.487431    6007 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0930 04:15:47.487455    6007 client.go:171] duration metric: took 218.395416ms to LocalClient.Create
	I0930 04:15:49.489656    6007 start.go:128] duration metric: took 2.240498083s to createHost
	I0930 04:15:49.489744    6007 start.go:83] releasing machines lock for "enable-default-cni-962000", held for 2.240637333s
	W0930 04:15:49.489813    6007 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0930 04:15:49.507111    6007 out.go:177] * Deleting "enable-default-cni-962000" in qemu2 ...
	W0930 04:15:49.539780    6007 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0930 04:15:49.539816    6007 start.go:729] Will try again in 5 seconds ...
	I0930 04:15:54.541834    6007 start.go:360] acquireMachinesLock for enable-default-cni-962000: {Name:mk6ecfc7d8126da9fa2597e515ffeee9de676ee7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0930 04:15:54.542090    6007 start.go:364] duration metric: took 219.5µs to acquireMachinesLock for "enable-default-cni-962000"
	I0930 04:15:54.542160    6007 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-962000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:enable-default-cni-962000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0930 04:15:54.542364    6007 start.go:125] createHost starting for "" (driver="qemu2")
	I0930 04:15:54.552766    6007 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0930 04:15:54.583812    6007 start.go:159] libmachine.API.Create for "enable-default-cni-962000" (driver="qemu2")
	I0930 04:15:54.583878    6007 client.go:168] LocalClient.Create starting
	I0930 04:15:54.584018    6007 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19734-1406/.minikube/certs/ca.pem
	I0930 04:15:54.584103    6007 main.go:141] libmachine: Decoding PEM data...
	I0930 04:15:54.584116    6007 main.go:141] libmachine: Parsing certificate...
	I0930 04:15:54.584172    6007 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19734-1406/.minikube/certs/cert.pem
	I0930 04:15:54.584208    6007 main.go:141] libmachine: Decoding PEM data...
	I0930 04:15:54.584216    6007 main.go:141] libmachine: Parsing certificate...
	I0930 04:15:54.584679    6007 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19734-1406/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0930 04:15:54.749172    6007 main.go:141] libmachine: Creating SSH key...
	I0930 04:15:54.823461    6007 main.go:141] libmachine: Creating Disk image...
	I0930 04:15:54.823475    6007 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0930 04:15:54.823703    6007 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/enable-default-cni-962000/disk.qcow2.raw /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/enable-default-cni-962000/disk.qcow2
	I0930 04:15:54.832995    6007 main.go:141] libmachine: STDOUT: 
	I0930 04:15:54.833012    6007 main.go:141] libmachine: STDERR: 
	I0930 04:15:54.833064    6007 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/enable-default-cni-962000/disk.qcow2 +20000M
	I0930 04:15:54.840847    6007 main.go:141] libmachine: STDOUT: Image resized.
	
	I0930 04:15:54.840869    6007 main.go:141] libmachine: STDERR: 
	I0930 04:15:54.840883    6007 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/enable-default-cni-962000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/enable-default-cni-962000/disk.qcow2
	I0930 04:15:54.840887    6007 main.go:141] libmachine: Starting QEMU VM...
	I0930 04:15:54.840894    6007 qemu.go:418] Using hvf for hardware acceleration
	I0930 04:15:54.840921    6007 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/enable-default-cni-962000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19734-1406/.minikube/machines/enable-default-cni-962000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/enable-default-cni-962000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4a:bf:c3:21:71:08 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/enable-default-cni-962000/disk.qcow2
	I0930 04:15:54.842585    6007 main.go:141] libmachine: STDOUT: 
	I0930 04:15:54.842599    6007 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0930 04:15:54.842613    6007 client.go:171] duration metric: took 258.727333ms to LocalClient.Create
	I0930 04:15:56.844794    6007 start.go:128] duration metric: took 2.302434625s to createHost
	I0930 04:15:56.844873    6007 start.go:83] releasing machines lock for "enable-default-cni-962000", held for 2.302809209s
	W0930 04:15:56.845182    6007 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-962000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-962000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0930 04:15:56.854770    6007 out.go:201] 
	W0930 04:15:56.863677    6007 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0930 04:15:56.863697    6007 out.go:270] * 
	* 
	W0930 04:15:56.865719    6007 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0930 04:15:56.880694    6007 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/enable-default-cni/Start (9.76s)
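
Every failure in this group reduces to the same root cause visible in the trace above: /opt/socket_vmnet/bin/socket_vmnet_client exits with status 1 because nothing is listening on the unix socket /var/run/socket_vmnet ("Connection refused"), so the QEMU VM is never launched. A minimal diagnostic sketch for the CI host follows; the gateway address is an illustrative value from socket_vmnet's documentation, not one taken from this run:

	# Is anything holding the unix socket that the client dials?
	ls -l /var/run/socket_vmnet
	pgrep -fl socket_vmnet

	# If the daemon is absent, starting it (as root, since vmnet needs
	# elevated privileges) should let the client connect on the next attempt:
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet

With the daemon listening, the socket_vmnet_client invocation logged above hands QEMU a connected descriptor (fd=3 in the -netdev socket arguments) instead of failing before boot.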

TestNetworkPlugins/group/bridge/Start (9.88s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p bridge-962000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p bridge-962000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 : exit status 80 (9.876038875s)

-- stdout --
	* [bridge-962000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19734
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19734-1406/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19734-1406/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "bridge-962000" primary control-plane node in "bridge-962000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "bridge-962000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0930 04:15:59.108108    6119 out.go:345] Setting OutFile to fd 1 ...
	I0930 04:15:59.108240    6119 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 04:15:59.108244    6119 out.go:358] Setting ErrFile to fd 2...
	I0930 04:15:59.108246    6119 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 04:15:59.108379    6119 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19734-1406/.minikube/bin
	I0930 04:15:59.109480    6119 out.go:352] Setting JSON to false
	I0930 04:15:59.125880    6119 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4522,"bootTime":1727690437,"procs":510,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0930 04:15:59.125944    6119 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0930 04:15:59.133096    6119 out.go:177] * [bridge-962000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0930 04:15:59.142152    6119 out.go:177]   - MINIKUBE_LOCATION=19734
	I0930 04:15:59.142169    6119 notify.go:220] Checking for updates...
	I0930 04:15:59.151138    6119 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19734-1406/kubeconfig
	I0930 04:15:59.154157    6119 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0930 04:15:59.157127    6119 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0930 04:15:59.160109    6119 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19734-1406/.minikube
	I0930 04:15:59.163246    6119 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0930 04:15:59.166462    6119 config.go:182] Loaded profile config "multinode-711000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0930 04:15:59.166527    6119 config.go:182] Loaded profile config "stopped-upgrade-312000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0930 04:15:59.166582    6119 driver.go:394] Setting default libvirt URI to qemu:///system
	I0930 04:15:59.171051    6119 out.go:177] * Using the qemu2 driver based on user configuration
	I0930 04:15:59.178225    6119 start.go:297] selected driver: qemu2
	I0930 04:15:59.178231    6119 start.go:901] validating driver "qemu2" against <nil>
	I0930 04:15:59.178237    6119 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0930 04:15:59.180430    6119 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0930 04:15:59.185076    6119 out.go:177] * Automatically selected the socket_vmnet network
	I0930 04:15:59.188170    6119 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0930 04:15:59.188185    6119 cni.go:84] Creating CNI manager for "bridge"
	I0930 04:15:59.188193    6119 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0930 04:15:59.188221    6119 start.go:340] cluster config:
	{Name:bridge-962000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:bridge-962000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 04:15:59.191814    6119 iso.go:125] acquiring lock: {Name:mk0aae8f7f133a44b195b0eefe4c7ae573d14aac Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0930 04:15:59.197111    6119 out.go:177] * Starting "bridge-962000" primary control-plane node in "bridge-962000" cluster
	I0930 04:15:59.201086    6119 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0930 04:15:59.201132    6119 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0930 04:15:59.201145    6119 cache.go:56] Caching tarball of preloaded images
	I0930 04:15:59.201215    6119 preload.go:172] Found /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0930 04:15:59.201221    6119 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0930 04:15:59.201283    6119 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19734-1406/.minikube/profiles/bridge-962000/config.json ...
	I0930 04:15:59.201297    6119 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19734-1406/.minikube/profiles/bridge-962000/config.json: {Name:mke3eb8a885ced0e89ede1673194aaaf3601dd5c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 04:15:59.201505    6119 start.go:360] acquireMachinesLock for bridge-962000: {Name:mk6ecfc7d8126da9fa2597e515ffeee9de676ee7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0930 04:15:59.201536    6119 start.go:364] duration metric: took 25.75µs to acquireMachinesLock for "bridge-962000"
	I0930 04:15:59.201548    6119 start.go:93] Provisioning new machine with config: &{Name:bridge-962000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:bridge-962000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0930 04:15:59.201574    6119 start.go:125] createHost starting for "" (driver="qemu2")
	I0930 04:15:59.210083    6119 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0930 04:15:59.226460    6119 start.go:159] libmachine.API.Create for "bridge-962000" (driver="qemu2")
	I0930 04:15:59.226491    6119 client.go:168] LocalClient.Create starting
	I0930 04:15:59.226560    6119 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19734-1406/.minikube/certs/ca.pem
	I0930 04:15:59.226590    6119 main.go:141] libmachine: Decoding PEM data...
	I0930 04:15:59.226599    6119 main.go:141] libmachine: Parsing certificate...
	I0930 04:15:59.226652    6119 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19734-1406/.minikube/certs/cert.pem
	I0930 04:15:59.226676    6119 main.go:141] libmachine: Decoding PEM data...
	I0930 04:15:59.226685    6119 main.go:141] libmachine: Parsing certificate...
	I0930 04:15:59.227087    6119 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19734-1406/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0930 04:15:59.387326    6119 main.go:141] libmachine: Creating SSH key...
	I0930 04:15:59.472970    6119 main.go:141] libmachine: Creating Disk image...
	I0930 04:15:59.472978    6119 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0930 04:15:59.473196    6119 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/bridge-962000/disk.qcow2.raw /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/bridge-962000/disk.qcow2
	I0930 04:15:59.482449    6119 main.go:141] libmachine: STDOUT: 
	I0930 04:15:59.482472    6119 main.go:141] libmachine: STDERR: 
	I0930 04:15:59.482538    6119 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/bridge-962000/disk.qcow2 +20000M
	I0930 04:15:59.490501    6119 main.go:141] libmachine: STDOUT: Image resized.
	
	I0930 04:15:59.490520    6119 main.go:141] libmachine: STDERR: 
	I0930 04:15:59.490553    6119 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/bridge-962000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/bridge-962000/disk.qcow2
	I0930 04:15:59.490560    6119 main.go:141] libmachine: Starting QEMU VM...
	I0930 04:15:59.490569    6119 qemu.go:418] Using hvf for hardware acceleration
	I0930 04:15:59.490598    6119 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/bridge-962000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19734-1406/.minikube/machines/bridge-962000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/bridge-962000/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:38:0d:72:78:6d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/bridge-962000/disk.qcow2
	I0930 04:15:59.492263    6119 main.go:141] libmachine: STDOUT: 
	I0930 04:15:59.492284    6119 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0930 04:15:59.492302    6119 client.go:171] duration metric: took 265.808542ms to LocalClient.Create
	I0930 04:16:01.494358    6119 start.go:128] duration metric: took 2.292811625s to createHost
	I0930 04:16:01.494404    6119 start.go:83] releasing machines lock for "bridge-962000", held for 2.292902917s
	W0930 04:16:01.494440    6119 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0930 04:16:01.504081    6119 out.go:177] * Deleting "bridge-962000" in qemu2 ...
	W0930 04:16:01.531342    6119 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0930 04:16:01.531353    6119 start.go:729] Will try again in 5 seconds ...
	I0930 04:16:06.533414    6119 start.go:360] acquireMachinesLock for bridge-962000: {Name:mk6ecfc7d8126da9fa2597e515ffeee9de676ee7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0930 04:16:06.533673    6119 start.go:364] duration metric: took 202.208µs to acquireMachinesLock for "bridge-962000"
	I0930 04:16:06.533703    6119 start.go:93] Provisioning new machine with config: &{Name:bridge-962000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:bridge-962000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0930 04:16:06.533821    6119 start.go:125] createHost starting for "" (driver="qemu2")
	I0930 04:16:06.545177    6119 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0930 04:16:06.574948    6119 start.go:159] libmachine.API.Create for "bridge-962000" (driver="qemu2")
	I0930 04:16:06.575003    6119 client.go:168] LocalClient.Create starting
	I0930 04:16:06.575100    6119 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19734-1406/.minikube/certs/ca.pem
	I0930 04:16:06.575157    6119 main.go:141] libmachine: Decoding PEM data...
	I0930 04:16:06.575170    6119 main.go:141] libmachine: Parsing certificate...
	I0930 04:16:06.575221    6119 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19734-1406/.minikube/certs/cert.pem
	I0930 04:16:06.575254    6119 main.go:141] libmachine: Decoding PEM data...
	I0930 04:16:06.575266    6119 main.go:141] libmachine: Parsing certificate...
	I0930 04:16:06.575758    6119 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19734-1406/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0930 04:16:06.739815    6119 main.go:141] libmachine: Creating SSH key...
	I0930 04:16:06.878710    6119 main.go:141] libmachine: Creating Disk image...
	I0930 04:16:06.878720    6119 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0930 04:16:06.878971    6119 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/bridge-962000/disk.qcow2.raw /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/bridge-962000/disk.qcow2
	I0930 04:16:06.888555    6119 main.go:141] libmachine: STDOUT: 
	I0930 04:16:06.888572    6119 main.go:141] libmachine: STDERR: 
	I0930 04:16:06.888636    6119 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/bridge-962000/disk.qcow2 +20000M
	I0930 04:16:06.896614    6119 main.go:141] libmachine: STDOUT: Image resized.
	
	I0930 04:16:06.896629    6119 main.go:141] libmachine: STDERR: 
	I0930 04:16:06.896641    6119 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/bridge-962000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/bridge-962000/disk.qcow2
	I0930 04:16:06.896647    6119 main.go:141] libmachine: Starting QEMU VM...
	I0930 04:16:06.896659    6119 qemu.go:418] Using hvf for hardware acceleration
	I0930 04:16:06.896691    6119 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/bridge-962000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19734-1406/.minikube/machines/bridge-962000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/bridge-962000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ba:a6:e3:5c:39:aa -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/bridge-962000/disk.qcow2
	I0930 04:16:06.898428    6119 main.go:141] libmachine: STDOUT: 
	I0930 04:16:06.898441    6119 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0930 04:16:06.898452    6119 client.go:171] duration metric: took 323.44975ms to LocalClient.Create
	I0930 04:16:08.900653    6119 start.go:128] duration metric: took 2.36683775s to createHost
	I0930 04:16:08.900769    6119 start.go:83] releasing machines lock for "bridge-962000", held for 2.367122625s
	W0930 04:16:08.901216    6119 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p bridge-962000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p bridge-962000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0930 04:16:08.920061    6119 out.go:201] 
	W0930 04:16:08.923844    6119 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0930 04:16:08.923861    6119 out.go:270] * 
	* 
	W0930 04:16:08.925221    6119 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0930 04:16:08.941975    6119 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/bridge/Start (9.88s)
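
One useful negative result from the trace above: both qemu-img steps succeed on every attempt (empty STDERR, "Image resized."), so disk-image preparation is healthy and the failure is isolated to the socket_vmnet step. The image pipeline the driver runs can be replayed by hand to confirm this; the file names below are placeholders, not the CI paths:

	# Convert the raw seed disk to qcow2, as the qemu2 driver logs it
	qemu-img convert -f raw -O qcow2 disk.qcow2.raw disk.qcow2

	# Grow the image by 20000 MB, matching Disk=20000MB in the cluster config
	qemu-img resize disk.qcow2 +20000M

	# Inspect the result; "virtual size" should reflect the resize
	qemu-img info disk.qcow2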

TestNetworkPlugins/group/kubenet/Start (9.94s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kubenet-962000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubenet-962000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 : exit status 80 (9.940119416s)

-- stdout --
	* [kubenet-962000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19734
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19734-1406/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19734-1406/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubenet-962000" primary control-plane node in "kubenet-962000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubenet-962000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0930 04:16:11.144713    6233 out.go:345] Setting OutFile to fd 1 ...
	I0930 04:16:11.144837    6233 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 04:16:11.144840    6233 out.go:358] Setting ErrFile to fd 2...
	I0930 04:16:11.144842    6233 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 04:16:11.144981    6233 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19734-1406/.minikube/bin
	I0930 04:16:11.146014    6233 out.go:352] Setting JSON to false
	I0930 04:16:11.162433    6233 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4534,"bootTime":1727690437,"procs":507,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0930 04:16:11.162502    6233 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0930 04:16:11.170560    6233 out.go:177] * [kubenet-962000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0930 04:16:11.179375    6233 out.go:177]   - MINIKUBE_LOCATION=19734
	I0930 04:16:11.179413    6233 notify.go:220] Checking for updates...
	I0930 04:16:11.187335    6233 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19734-1406/kubeconfig
	I0930 04:16:11.190341    6233 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0930 04:16:11.193333    6233 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0930 04:16:11.196344    6233 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19734-1406/.minikube
	I0930 04:16:11.199292    6233 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0930 04:16:11.202741    6233 config.go:182] Loaded profile config "multinode-711000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0930 04:16:11.202803    6233 config.go:182] Loaded profile config "stopped-upgrade-312000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0930 04:16:11.202860    6233 driver.go:394] Setting default libvirt URI to qemu:///system
	I0930 04:16:11.206234    6233 out.go:177] * Using the qemu2 driver based on user configuration
	I0930 04:16:11.213329    6233 start.go:297] selected driver: qemu2
	I0930 04:16:11.213335    6233 start.go:901] validating driver "qemu2" against <nil>
	I0930 04:16:11.213342    6233 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0930 04:16:11.215745    6233 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0930 04:16:11.217421    6233 out.go:177] * Automatically selected the socket_vmnet network
	I0930 04:16:11.220354    6233 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0930 04:16:11.220370    6233 cni.go:80] network plugin configured as "kubenet", returning disabled
	I0930 04:16:11.220395    6233 start.go:340] cluster config:
	{Name:kubenet-962000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kubenet-962000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 04:16:11.224188    6233 iso.go:125] acquiring lock: {Name:mk0aae8f7f133a44b195b0eefe4c7ae573d14aac Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0930 04:16:11.231300    6233 out.go:177] * Starting "kubenet-962000" primary control-plane node in "kubenet-962000" cluster
	I0930 04:16:11.235318    6233 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0930 04:16:11.235335    6233 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0930 04:16:11.235343    6233 cache.go:56] Caching tarball of preloaded images
	I0930 04:16:11.235398    6233 preload.go:172] Found /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0930 04:16:11.235403    6233 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0930 04:16:11.235468    6233 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19734-1406/.minikube/profiles/kubenet-962000/config.json ...
	I0930 04:16:11.235478    6233 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19734-1406/.minikube/profiles/kubenet-962000/config.json: {Name:mk396a1dceced33f1b34b6fbd1d697467e385115 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 04:16:11.235697    6233 start.go:360] acquireMachinesLock for kubenet-962000: {Name:mk6ecfc7d8126da9fa2597e515ffeee9de676ee7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0930 04:16:11.235729    6233 start.go:364] duration metric: took 26.834µs to acquireMachinesLock for "kubenet-962000"
	I0930 04:16:11.235741    6233 start.go:93] Provisioning new machine with config: &{Name:kubenet-962000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kubenet-962000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0930 04:16:11.235778    6233 start.go:125] createHost starting for "" (driver="qemu2")
	I0930 04:16:11.244298    6233 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0930 04:16:11.259782    6233 start.go:159] libmachine.API.Create for "kubenet-962000" (driver="qemu2")
	I0930 04:16:11.259809    6233 client.go:168] LocalClient.Create starting
	I0930 04:16:11.259879    6233 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19734-1406/.minikube/certs/ca.pem
	I0930 04:16:11.259911    6233 main.go:141] libmachine: Decoding PEM data...
	I0930 04:16:11.259920    6233 main.go:141] libmachine: Parsing certificate...
	I0930 04:16:11.259970    6233 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19734-1406/.minikube/certs/cert.pem
	I0930 04:16:11.259993    6233 main.go:141] libmachine: Decoding PEM data...
	I0930 04:16:11.260002    6233 main.go:141] libmachine: Parsing certificate...
	I0930 04:16:11.260344    6233 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19734-1406/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0930 04:16:11.425757    6233 main.go:141] libmachine: Creating SSH key...
	I0930 04:16:11.695428    6233 main.go:141] libmachine: Creating Disk image...
	I0930 04:16:11.695439    6233 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0930 04:16:11.695653    6233 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/kubenet-962000/disk.qcow2.raw /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/kubenet-962000/disk.qcow2
	I0930 04:16:11.705200    6233 main.go:141] libmachine: STDOUT: 
	I0930 04:16:11.705226    6233 main.go:141] libmachine: STDERR: 
	I0930 04:16:11.705294    6233 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/kubenet-962000/disk.qcow2 +20000M
	I0930 04:16:11.713386    6233 main.go:141] libmachine: STDOUT: Image resized.
	
	I0930 04:16:11.713409    6233 main.go:141] libmachine: STDERR: 
	I0930 04:16:11.713428    6233 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/kubenet-962000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/kubenet-962000/disk.qcow2
	I0930 04:16:11.713434    6233 main.go:141] libmachine: Starting QEMU VM...
	I0930 04:16:11.713445    6233 qemu.go:418] Using hvf for hardware acceleration
	I0930 04:16:11.713473    6233 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/kubenet-962000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19734-1406/.minikube/machines/kubenet-962000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/kubenet-962000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0e:58:53:94:82:61 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/kubenet-962000/disk.qcow2
	I0930 04:16:11.715129    6233 main.go:141] libmachine: STDOUT: 
	I0930 04:16:11.715143    6233 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0930 04:16:11.715172    6233 client.go:171] duration metric: took 455.365417ms to LocalClient.Create
	I0930 04:16:13.716236    6233 start.go:128] duration metric: took 2.480468833s to createHost
	I0930 04:16:13.716305    6233 start.go:83] releasing machines lock for "kubenet-962000", held for 2.480611542s
	W0930 04:16:13.716353    6233 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0930 04:16:13.725993    6233 out.go:177] * Deleting "kubenet-962000" in qemu2 ...
	W0930 04:16:13.758272    6233 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0930 04:16:13.758299    6233 start.go:729] Will try again in 5 seconds ...
	I0930 04:16:18.760431    6233 start.go:360] acquireMachinesLock for kubenet-962000: {Name:mk6ecfc7d8126da9fa2597e515ffeee9de676ee7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0930 04:16:18.760753    6233 start.go:364] duration metric: took 261.75µs to acquireMachinesLock for "kubenet-962000"
	I0930 04:16:18.760822    6233 start.go:93] Provisioning new machine with config: &{Name:kubenet-962000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kubenet-962000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0930 04:16:18.761507    6233 start.go:125] createHost starting for "" (driver="qemu2")
	I0930 04:16:18.767029    6233 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0930 04:16:18.795386    6233 start.go:159] libmachine.API.Create for "kubenet-962000" (driver="qemu2")
	I0930 04:16:18.795420    6233 client.go:168] LocalClient.Create starting
	I0930 04:16:18.795528    6233 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19734-1406/.minikube/certs/ca.pem
	I0930 04:16:18.795570    6233 main.go:141] libmachine: Decoding PEM data...
	I0930 04:16:18.795588    6233 main.go:141] libmachine: Parsing certificate...
	I0930 04:16:18.795637    6233 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19734-1406/.minikube/certs/cert.pem
	I0930 04:16:18.795668    6233 main.go:141] libmachine: Decoding PEM data...
	I0930 04:16:18.795678    6233 main.go:141] libmachine: Parsing certificate...
	I0930 04:16:18.796043    6233 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19734-1406/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0930 04:16:18.956178    6233 main.go:141] libmachine: Creating SSH key...
	I0930 04:16:19.001276    6233 main.go:141] libmachine: Creating Disk image...
	I0930 04:16:19.001285    6233 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0930 04:16:19.001493    6233 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/kubenet-962000/disk.qcow2.raw /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/kubenet-962000/disk.qcow2
	I0930 04:16:19.010768    6233 main.go:141] libmachine: STDOUT: 
	I0930 04:16:19.010789    6233 main.go:141] libmachine: STDERR: 
	I0930 04:16:19.010854    6233 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/kubenet-962000/disk.qcow2 +20000M
	I0930 04:16:19.018856    6233 main.go:141] libmachine: STDOUT: Image resized.
	
	I0930 04:16:19.018870    6233 main.go:141] libmachine: STDERR: 
	I0930 04:16:19.018892    6233 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/kubenet-962000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/kubenet-962000/disk.qcow2
	I0930 04:16:19.018899    6233 main.go:141] libmachine: Starting QEMU VM...
	I0930 04:16:19.018907    6233 qemu.go:418] Using hvf for hardware acceleration
	I0930 04:16:19.018950    6233 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/kubenet-962000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19734-1406/.minikube/machines/kubenet-962000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/kubenet-962000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a2:80:95:ba:00:34 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/kubenet-962000/disk.qcow2
	I0930 04:16:19.020675    6233 main.go:141] libmachine: STDOUT: 
	I0930 04:16:19.020688    6233 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0930 04:16:19.020702    6233 client.go:171] duration metric: took 225.279292ms to LocalClient.Create
	I0930 04:16:21.022759    6233 start.go:128] duration metric: took 2.26127575s to createHost
	I0930 04:16:21.022787    6233 start.go:83] releasing machines lock for "kubenet-962000", held for 2.262059708s
	W0930 04:16:21.022947    6233 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kubenet-962000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubenet-962000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0930 04:16:21.031890    6233 out.go:201] 
	W0930 04:16:21.034915    6233 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0930 04:16:21.034926    6233 out.go:270] * 
	* 
	W0930 04:16:21.036025    6233 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0930 04:16:21.045880    6233 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kubenet/Start (9.94s)
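
Note on root cause: every "exit status 80" in this group traces back to the same line in the stderr above: the qemu2 driver cannot reach the socket_vmnet unix socket at /var/run/socket_vmnet. The Go probe below is a diagnostic sketch only (it assumes the default socket path shown in the cluster config; it is not part of the test suite) and reproduces the exact check that fails:

    package main

    import (
        "fmt"
        "net"
        "os"
        "time"
    )

    func main() {
        // Dial the same unix socket the qemu2 driver needs. "connection refused"
        // (or "no such file or directory") means no socket_vmnet daemon is
        // serving /var/run/socket_vmnet, the SocketVMnetPath in the config above.
        conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
        if err != nil {
            fmt.Fprintf(os.Stderr, "socket_vmnet not reachable: %v\n", err)
            os.Exit(1)
        }
        conn.Close()
        fmt.Println("socket_vmnet is accepting connections")
    }

Run on the affected agent, a refused dial would confirm the daemon is down, which matches every VM creation attempt in this report.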

TestStartStop/group/old-k8s-version/serial/FirstStart (9.9s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-153000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-153000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (9.841811917s)

-- stdout --
	* [old-k8s-version-153000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19734
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19734-1406/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19734-1406/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "old-k8s-version-153000" primary control-plane node in "old-k8s-version-153000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "old-k8s-version-153000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0930 04:16:23.220312    6349 out.go:345] Setting OutFile to fd 1 ...
	I0930 04:16:23.220447    6349 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 04:16:23.220451    6349 out.go:358] Setting ErrFile to fd 2...
	I0930 04:16:23.220453    6349 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 04:16:23.220594    6349 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19734-1406/.minikube/bin
	I0930 04:16:23.221666    6349 out.go:352] Setting JSON to false
	I0930 04:16:23.238064    6349 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4546,"bootTime":1727690437,"procs":510,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0930 04:16:23.238141    6349 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0930 04:16:23.246354    6349 out.go:177] * [old-k8s-version-153000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0930 04:16:23.255181    6349 out.go:177]   - MINIKUBE_LOCATION=19734
	I0930 04:16:23.255246    6349 notify.go:220] Checking for updates...
	I0930 04:16:23.263180    6349 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19734-1406/kubeconfig
	I0930 04:16:23.266209    6349 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0930 04:16:23.270146    6349 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0930 04:16:23.273168    6349 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19734-1406/.minikube
	I0930 04:16:23.276224    6349 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0930 04:16:23.279485    6349 config.go:182] Loaded profile config "multinode-711000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0930 04:16:23.279562    6349 config.go:182] Loaded profile config "stopped-upgrade-312000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0930 04:16:23.279611    6349 driver.go:394] Setting default libvirt URI to qemu:///system
	I0930 04:16:23.284175    6349 out.go:177] * Using the qemu2 driver based on user configuration
	I0930 04:16:23.291059    6349 start.go:297] selected driver: qemu2
	I0930 04:16:23.291064    6349 start.go:901] validating driver "qemu2" against <nil>
	I0930 04:16:23.291077    6349 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0930 04:16:23.293104    6349 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0930 04:16:23.297156    6349 out.go:177] * Automatically selected the socket_vmnet network
	I0930 04:16:23.300247    6349 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0930 04:16:23.300263    6349 cni.go:84] Creating CNI manager for ""
	I0930 04:16:23.300293    6349 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0930 04:16:23.300328    6349 start.go:340] cluster config:
	{Name:old-k8s-version-153000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-153000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 04:16:23.303975    6349 iso.go:125] acquiring lock: {Name:mk0aae8f7f133a44b195b0eefe4c7ae573d14aac Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0930 04:16:23.312173    6349 out.go:177] * Starting "old-k8s-version-153000" primary control-plane node in "old-k8s-version-153000" cluster
	I0930 04:16:23.315157    6349 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0930 04:16:23.315174    6349 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0930 04:16:23.315185    6349 cache.go:56] Caching tarball of preloaded images
	I0930 04:16:23.315242    6349 preload.go:172] Found /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0930 04:16:23.315248    6349 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0930 04:16:23.315313    6349 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19734-1406/.minikube/profiles/old-k8s-version-153000/config.json ...
	I0930 04:16:23.315324    6349 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19734-1406/.minikube/profiles/old-k8s-version-153000/config.json: {Name:mk73b73c6bf3cc57885816a55e058adeae2bf4df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 04:16:23.315537    6349 start.go:360] acquireMachinesLock for old-k8s-version-153000: {Name:mk6ecfc7d8126da9fa2597e515ffeee9de676ee7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0930 04:16:23.315570    6349 start.go:364] duration metric: took 26.25µs to acquireMachinesLock for "old-k8s-version-153000"
	I0930 04:16:23.315581    6349 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-153000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-153000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0930 04:16:23.315618    6349 start.go:125] createHost starting for "" (driver="qemu2")
	I0930 04:16:23.323043    6349 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0930 04:16:23.339703    6349 start.go:159] libmachine.API.Create for "old-k8s-version-153000" (driver="qemu2")
	I0930 04:16:23.339731    6349 client.go:168] LocalClient.Create starting
	I0930 04:16:23.339804    6349 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19734-1406/.minikube/certs/ca.pem
	I0930 04:16:23.339833    6349 main.go:141] libmachine: Decoding PEM data...
	I0930 04:16:23.339848    6349 main.go:141] libmachine: Parsing certificate...
	I0930 04:16:23.339887    6349 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19734-1406/.minikube/certs/cert.pem
	I0930 04:16:23.339910    6349 main.go:141] libmachine: Decoding PEM data...
	I0930 04:16:23.339917    6349 main.go:141] libmachine: Parsing certificate...
	I0930 04:16:23.340273    6349 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19734-1406/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0930 04:16:23.499307    6349 main.go:141] libmachine: Creating SSH key...
	I0930 04:16:23.584345    6349 main.go:141] libmachine: Creating Disk image...
	I0930 04:16:23.584354    6349 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0930 04:16:23.584595    6349 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/old-k8s-version-153000/disk.qcow2.raw /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/old-k8s-version-153000/disk.qcow2
	I0930 04:16:23.593679    6349 main.go:141] libmachine: STDOUT: 
	I0930 04:16:23.593696    6349 main.go:141] libmachine: STDERR: 
	I0930 04:16:23.593755    6349 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/old-k8s-version-153000/disk.qcow2 +20000M
	I0930 04:16:23.601842    6349 main.go:141] libmachine: STDOUT: Image resized.
	
	I0930 04:16:23.601862    6349 main.go:141] libmachine: STDERR: 
	I0930 04:16:23.601877    6349 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/old-k8s-version-153000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/old-k8s-version-153000/disk.qcow2
	I0930 04:16:23.601882    6349 main.go:141] libmachine: Starting QEMU VM...
	I0930 04:16:23.601891    6349 qemu.go:418] Using hvf for hardware acceleration
	I0930 04:16:23.601920    6349 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/old-k8s-version-153000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19734-1406/.minikube/machines/old-k8s-version-153000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/old-k8s-version-153000/qemu.pid -device virtio-net-pci,netdev=net0,mac=22:8c:d9:8a:59:21 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/old-k8s-version-153000/disk.qcow2
	I0930 04:16:23.603572    6349 main.go:141] libmachine: STDOUT: 
	I0930 04:16:23.603588    6349 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0930 04:16:23.603608    6349 client.go:171] duration metric: took 263.876583ms to LocalClient.Create
	I0930 04:16:25.605906    6349 start.go:128] duration metric: took 2.290281084s to createHost
	I0930 04:16:25.606000    6349 start.go:83] releasing machines lock for "old-k8s-version-153000", held for 2.290459458s
	W0930 04:16:25.606087    6349 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0930 04:16:25.620276    6349 out.go:177] * Deleting "old-k8s-version-153000" in qemu2 ...
	W0930 04:16:25.655873    6349 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0930 04:16:25.655896    6349 start.go:729] Will try again in 5 seconds ...
	I0930 04:16:30.658072    6349 start.go:360] acquireMachinesLock for old-k8s-version-153000: {Name:mk6ecfc7d8126da9fa2597e515ffeee9de676ee7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0930 04:16:30.658610    6349 start.go:364] duration metric: took 446.916µs to acquireMachinesLock for "old-k8s-version-153000"
	I0930 04:16:30.658797    6349 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-153000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-153000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0930 04:16:30.659058    6349 start.go:125] createHost starting for "" (driver="qemu2")
	I0930 04:16:30.666915    6349 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0930 04:16:30.719762    6349 start.go:159] libmachine.API.Create for "old-k8s-version-153000" (driver="qemu2")
	I0930 04:16:30.719813    6349 client.go:168] LocalClient.Create starting
	I0930 04:16:30.719940    6349 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19734-1406/.minikube/certs/ca.pem
	I0930 04:16:30.720008    6349 main.go:141] libmachine: Decoding PEM data...
	I0930 04:16:30.720029    6349 main.go:141] libmachine: Parsing certificate...
	I0930 04:16:30.720101    6349 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19734-1406/.minikube/certs/cert.pem
	I0930 04:16:30.720148    6349 main.go:141] libmachine: Decoding PEM data...
	I0930 04:16:30.720160    6349 main.go:141] libmachine: Parsing certificate...
	I0930 04:16:30.720962    6349 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19734-1406/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0930 04:16:30.889753    6349 main.go:141] libmachine: Creating SSH key...
	I0930 04:16:30.961808    6349 main.go:141] libmachine: Creating Disk image...
	I0930 04:16:30.961814    6349 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0930 04:16:30.962047    6349 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/old-k8s-version-153000/disk.qcow2.raw /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/old-k8s-version-153000/disk.qcow2
	I0930 04:16:30.971489    6349 main.go:141] libmachine: STDOUT: 
	I0930 04:16:30.971514    6349 main.go:141] libmachine: STDERR: 
	I0930 04:16:30.971590    6349 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/old-k8s-version-153000/disk.qcow2 +20000M
	I0930 04:16:30.979681    6349 main.go:141] libmachine: STDOUT: Image resized.
	
	I0930 04:16:30.979699    6349 main.go:141] libmachine: STDERR: 
	I0930 04:16:30.979716    6349 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/old-k8s-version-153000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/old-k8s-version-153000/disk.qcow2
	I0930 04:16:30.979721    6349 main.go:141] libmachine: Starting QEMU VM...
	I0930 04:16:30.979729    6349 qemu.go:418] Using hvf for hardware acceleration
	I0930 04:16:30.979767    6349 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/old-k8s-version-153000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19734-1406/.minikube/machines/old-k8s-version-153000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/old-k8s-version-153000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f6:df:be:fb:d6:0e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/old-k8s-version-153000/disk.qcow2
	I0930 04:16:30.981482    6349 main.go:141] libmachine: STDOUT: 
	I0930 04:16:30.981498    6349 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0930 04:16:30.981510    6349 client.go:171] duration metric: took 261.694791ms to LocalClient.Create
	I0930 04:16:32.983589    6349 start.go:128] duration metric: took 2.324523958s to createHost
	I0930 04:16:32.983621    6349 start.go:83] releasing machines lock for "old-k8s-version-153000", held for 2.325029583s
	W0930 04:16:32.983861    6349 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-153000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-153000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0930 04:16:32.998341    6349 out.go:201] 
	W0930 04:16:33.002388    6349 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0930 04:16:33.002402    6349 out.go:270] * 
	* 
	W0930 04:16:33.003669    6349 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0930 04:16:33.020636    6349 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p old-k8s-version-153000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-153000 -n old-k8s-version-153000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-153000 -n old-k8s-version-153000: exit status 7 (55.382958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-153000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (9.90s)
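
Why a refused socket connection aborts the VM before it even boots: the qemu command line logged above is launched through /opt/socket_vmnet/bin/socket_vmnet_client, which connects to /var/run/socket_vmnet and hands the connected descriptor to qemu as fd 3 (hence "-netdev socket,id=net0,fd=3"). The Go sketch below illustrates that descriptor hand-off pattern; it is illustrative only, with the remaining qemu arguments elided, and is not socket_vmnet_client's actual implementation:

    package main

    import (
        "log"
        "net"
        "os"
        "os/exec"
    )

    func main() {
        // This dial is the step that fails with "Connection refused"
        // throughout this report.
        conn, err := net.Dial("unix", "/var/run/socket_vmnet")
        if err != nil {
            log.Fatalf("dial socket_vmnet: %v", err)
        }
        // Extract the connected descriptor and pass it to the child:
        // ExtraFiles[0] becomes fd 3, after stdin/stdout/stderr.
        f, err := conn.(*net.UnixConn).File()
        if err != nil {
            log.Fatal(err)
        }
        defer f.Close()
        cmd := exec.Command("qemu-system-aarch64", "-netdev", "socket,id=net0,fd=3")
        cmd.ExtraFiles = []*os.File{f}
        if err := cmd.Run(); err != nil {
            log.Fatal(err)
        }
    }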

TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-153000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-153000 create -f testdata/busybox.yaml: exit status 1 (29.077583ms)

** stderr ** 
	error: context "old-k8s-version-153000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-153000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-153000 -n old-k8s-version-153000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-153000 -n old-k8s-version-153000: exit status 7 (30.381625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-153000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-153000 -n old-k8s-version-153000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-153000 -n old-k8s-version-153000: exit status 7 (29.789334ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-153000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)
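
The DeployApp failure is a cascade, not an independent defect: FirstStart never created the VM, so the "old-k8s-version-153000" context was never written to the kubeconfig, and every kubectl --context invocation exits 1 immediately. A hypothetical pre-check with client-go's clientcmd package (a sketch under that assumption; nothing like it exists in the suite) would separate "app failed to deploy" from "cluster was never created":

    package main

    import (
        "fmt"
        "os"

        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Load the kubeconfig the tests point KUBECONFIG at and look for
        // the profile's context before running any dependent step.
        cfg, err := clientcmd.LoadFromFile(os.Getenv("KUBECONFIG"))
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        if _, ok := cfg.Contexts["old-k8s-version-153000"]; !ok {
            fmt.Fprintln(os.Stderr, `context "old-k8s-version-153000" does not exist`)
            os.Exit(1)
        }
        fmt.Println("context present; the deploy step is meaningful")
    }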

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p old-k8s-version-153000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-153000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-153000 describe deploy/metrics-server -n kube-system: exit status 1 (26.992709ms)

** stderr ** 
	error: context "old-k8s-version-153000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-153000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-153000 -n old-k8s-version-153000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-153000 -n old-k8s-version-153000: exit status 7 (29.770083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-153000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.11s)
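
On the assertion above: the expected substring " fake.domain/registry.k8s.io/echoserver:1.4" is the --registries override joined onto the --images override from the addons enable command; because the cluster never came up, the describe call returned nothing and the substring check failed against empty deployment info. A small Go sketch of that composition, derived only from the flags and expected value in this log:

    package main

    import "fmt"

    // expectedImage joins an addon's registry override onto its image
    // override, as inferred from the flags and assertion in this log.
    func expectedImage(registry, image string) string {
        return registry + "/" + image
    }

    func main() {
        fmt.Println(expectedImage("fake.domain", "registry.k8s.io/echoserver:1.4"))
        // Prints: fake.domain/registry.k8s.io/echoserver:1.4
    }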

TestStartStop/group/no-preload/serial/FirstStart (10.22s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-616000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-616000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (10.166070458s)

-- stdout --
	* [no-preload-616000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19734
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19734-1406/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19734-1406/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "no-preload-616000" primary control-plane node in "no-preload-616000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "no-preload-616000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0930 04:16:36.419579    6394 out.go:345] Setting OutFile to fd 1 ...
	I0930 04:16:36.419723    6394 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 04:16:36.419726    6394 out.go:358] Setting ErrFile to fd 2...
	I0930 04:16:36.419729    6394 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 04:16:36.419861    6394 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19734-1406/.minikube/bin
	I0930 04:16:36.420900    6394 out.go:352] Setting JSON to false
	I0930 04:16:36.437186    6394 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4559,"bootTime":1727690437,"procs":506,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0930 04:16:36.437253    6394 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0930 04:16:36.442637    6394 out.go:177] * [no-preload-616000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0930 04:16:36.450573    6394 out.go:177]   - MINIKUBE_LOCATION=19734
	I0930 04:16:36.450616    6394 notify.go:220] Checking for updates...
	I0930 04:16:36.459857    6394 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19734-1406/kubeconfig
	I0930 04:16:36.463575    6394 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0930 04:16:36.466615    6394 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0930 04:16:36.469627    6394 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19734-1406/.minikube
	I0930 04:16:36.472554    6394 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0930 04:16:36.475925    6394 config.go:182] Loaded profile config "multinode-711000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0930 04:16:36.476000    6394 config.go:182] Loaded profile config "old-k8s-version-153000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0930 04:16:36.476054    6394 driver.go:394] Setting default libvirt URI to qemu:///system
	I0930 04:16:36.480598    6394 out.go:177] * Using the qemu2 driver based on user configuration
	I0930 04:16:36.487563    6394 start.go:297] selected driver: qemu2
	I0930 04:16:36.487569    6394 start.go:901] validating driver "qemu2" against <nil>
	I0930 04:16:36.487577    6394 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0930 04:16:36.489769    6394 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0930 04:16:36.492567    6394 out.go:177] * Automatically selected the socket_vmnet network
	I0930 04:16:36.495643    6394 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0930 04:16:36.495677    6394 cni.go:84] Creating CNI manager for ""
	I0930 04:16:36.495701    6394 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0930 04:16:36.495706    6394 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0930 04:16:36.495735    6394 start.go:340] cluster config:
	{Name:no-preload-616000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-616000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 04:16:36.499534    6394 iso.go:125] acquiring lock: {Name:mk0aae8f7f133a44b195b0eefe4c7ae573d14aac Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0930 04:16:36.507398    6394 out.go:177] * Starting "no-preload-616000" primary control-plane node in "no-preload-616000" cluster
	I0930 04:16:36.511533    6394 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0930 04:16:36.511631    6394 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19734-1406/.minikube/profiles/no-preload-616000/config.json ...
	I0930 04:16:36.511653    6394 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19734-1406/.minikube/profiles/no-preload-616000/config.json: {Name:mk5fa0558ca46469c77424876e993a85c6d4a78f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 04:16:36.511658    6394 cache.go:107] acquiring lock: {Name:mk40bb24f276da084af3362fead279a169db3542 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0930 04:16:36.511711    6394 cache.go:107] acquiring lock: {Name:mk7d310fe1c75cb1be4aa837520328a1ebcf6887 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0930 04:16:36.511722    6394 cache.go:107] acquiring lock: {Name:mk9ff08a5a9476f591bef8fce37c02edb6d066ea Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0930 04:16:36.511745    6394 cache.go:115] /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0930 04:16:36.511752    6394 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19734-1406/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 100.041µs
	I0930 04:16:36.511759    6394 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0930 04:16:36.511766    6394 cache.go:107] acquiring lock: {Name:mk4d5ddcc2bdae7f940fe4d8dae725b1ee59ac37 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0930 04:16:36.511664    6394 cache.go:107] acquiring lock: {Name:mk7d6e85ba87fd69272641b86402ec0a54f2a69d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0930 04:16:36.511883    6394 cache.go:107] acquiring lock: {Name:mk3fc4be5d86e5e9e739030d2386c3ecd2420805 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0930 04:16:36.511885    6394 cache.go:107] acquiring lock: {Name:mk040147238bd52eda36ca62253896470a15f110 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0930 04:16:36.511905    6394 cache.go:107] acquiring lock: {Name:mk50ba610827c04633e4a8bea026971417f61598 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0930 04:16:36.511920    6394 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0930 04:16:36.512038    6394 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0930 04:16:36.512061    6394 start.go:360] acquireMachinesLock for no-preload-616000: {Name:mk6ecfc7d8126da9fa2597e515ffeee9de676ee7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0930 04:16:36.512081    6394 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.1
	I0930 04:16:36.512117    6394 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0930 04:16:36.512152    6394 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I0930 04:16:36.512226    6394 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.1
	I0930 04:16:36.512242    6394 start.go:364] duration metric: took 174.083µs to acquireMachinesLock for "no-preload-616000"
	I0930 04:16:36.512228    6394 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.1
	I0930 04:16:36.512255    6394 start.go:93] Provisioning new machine with config: &{Name:no-preload-616000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-616000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0930 04:16:36.512296    6394 start.go:125] createHost starting for "" (driver="qemu2")
	I0930 04:16:36.520506    6394 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0930 04:16:36.523926    6394 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0930 04:16:36.524707    6394 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.1
	I0930 04:16:36.527243    6394 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I0930 04:16:36.527270    6394 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0930 04:16:36.527298    6394 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0930 04:16:36.527302    6394 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.1
	I0930 04:16:36.527305    6394 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.1
	I0930 04:16:36.540099    6394 start.go:159] libmachine.API.Create for "no-preload-616000" (driver="qemu2")
	I0930 04:16:36.540128    6394 client.go:168] LocalClient.Create starting
	I0930 04:16:36.540205    6394 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19734-1406/.minikube/certs/ca.pem
	I0930 04:16:36.540234    6394 main.go:141] libmachine: Decoding PEM data...
	I0930 04:16:36.540247    6394 main.go:141] libmachine: Parsing certificate...
	I0930 04:16:36.540285    6394 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19734-1406/.minikube/certs/cert.pem
	I0930 04:16:36.540310    6394 main.go:141] libmachine: Decoding PEM data...
	I0930 04:16:36.540319    6394 main.go:141] libmachine: Parsing certificate...
	I0930 04:16:36.540663    6394 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19734-1406/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0930 04:16:36.781133    6394 main.go:141] libmachine: Creating SSH key...
	I0930 04:16:36.892056    6394 main.go:141] libmachine: Creating Disk image...
	I0930 04:16:36.892074    6394 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0930 04:16:36.892299    6394 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/no-preload-616000/disk.qcow2.raw /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/no-preload-616000/disk.qcow2
	I0930 04:16:36.911460    6394 main.go:141] libmachine: STDOUT: 
	I0930 04:16:36.911477    6394 main.go:141] libmachine: STDERR: 
	I0930 04:16:36.911538    6394 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/no-preload-616000/disk.qcow2 +20000M
	I0930 04:16:36.924075    6394 main.go:141] libmachine: STDOUT: Image resized.
	
	I0930 04:16:36.924090    6394 main.go:141] libmachine: STDERR: 
	I0930 04:16:36.924108    6394 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/no-preload-616000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/no-preload-616000/disk.qcow2
	I0930 04:16:36.924113    6394 main.go:141] libmachine: Starting QEMU VM...
	I0930 04:16:36.924125    6394 qemu.go:418] Using hvf for hardware acceleration
	I0930 04:16:36.924149    6394 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/no-preload-616000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19734-1406/.minikube/machines/no-preload-616000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/no-preload-616000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4e:13:ba:ff:bb:9b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/no-preload-616000/disk.qcow2
	I0930 04:16:36.925804    6394 main.go:141] libmachine: STDOUT: 
	I0930 04:16:36.925827    6394 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0930 04:16:36.925843    6394 client.go:171] duration metric: took 385.71475ms to LocalClient.Create
	I0930 04:16:38.421553    6394 cache.go:162] opening:  /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10
	I0930 04:16:38.584491    6394 cache.go:162] opening:  /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1
	I0930 04:16:38.584509    6394 cache.go:157] /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 exists
	I0930 04:16:38.584548    6394 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/Users/jenkins/minikube-integration/19734-1406/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10" took 2.0728715s
	I0930 04:16:38.584578    6394 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 succeeded
	I0930 04:16:38.590185    6394 cache.go:162] opening:  /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1
	I0930 04:16:38.590688    6394 cache.go:162] opening:  /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0
	I0930 04:16:38.926231    6394 start.go:128] duration metric: took 2.413955084s to createHost
	I0930 04:16:38.926271    6394 start.go:83] releasing machines lock for "no-preload-616000", held for 2.414061083s
	W0930 04:16:38.926341    6394 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0930 04:16:38.940708    6394 out.go:177] * Deleting "no-preload-616000" in qemu2 ...
	W0930 04:16:38.981440    6394 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0930 04:16:38.981462    6394 start.go:729] Will try again in 5 seconds ...
	I0930 04:16:39.073481    6394 cache.go:162] opening:  /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1
	I0930 04:16:39.103723    6394 cache.go:162] opening:  /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3
	I0930 04:16:39.120075    6394 cache.go:162] opening:  /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1
	I0930 04:16:41.719316    6394 cache.go:157] /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1 exists
	I0930 04:16:41.719364    6394 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.1" -> "/Users/jenkins/minikube-integration/19734-1406/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1" took 5.207631916s
	I0930 04:16:41.719413    6394 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.1 -> /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1 succeeded
	I0930 04:16:42.565764    6394 cache.go:157] /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 exists
	I0930 04:16:42.565811    6394 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.3" -> "/Users/jenkins/minikube-integration/19734-1406/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3" took 6.054082667s
	I0930 04:16:42.565835    6394 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.3 -> /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 succeeded
	I0930 04:16:42.789910    6394 cache.go:157] /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1 exists
	I0930 04:16:42.789954    6394 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.1" -> "/Users/jenkins/minikube-integration/19734-1406/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1" took 6.278389959s
	I0930 04:16:42.789979    6394 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.1 -> /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1 succeeded
	I0930 04:16:43.646985    6394 cache.go:157] /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1 exists
	I0930 04:16:43.647033    6394 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.1" -> "/Users/jenkins/minikube-integration/19734-1406/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1" took 7.135497583s
	I0930 04:16:43.647059    6394 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.1 -> /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1 succeeded
	I0930 04:16:43.916829    6394 cache.go:157] /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1 exists
	I0930 04:16:43.916865    6394 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.1" -> "/Users/jenkins/minikube-integration/19734-1406/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1" took 7.405117833s
	I0930 04:16:43.916887    6394 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.1 -> /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1 succeeded
	I0930 04:16:43.981580    6394 start.go:360] acquireMachinesLock for no-preload-616000: {Name:mk6ecfc7d8126da9fa2597e515ffeee9de676ee7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0930 04:16:43.986537    6394 start.go:364] duration metric: took 4.899625ms to acquireMachinesLock for "no-preload-616000"
	I0930 04:16:43.986589    6394 start.go:93] Provisioning new machine with config: &{Name:no-preload-616000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-616000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0930 04:16:43.986765    6394 start.go:125] createHost starting for "" (driver="qemu2")
	I0930 04:16:43.994925    6394 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0930 04:16:44.042203    6394 start.go:159] libmachine.API.Create for "no-preload-616000" (driver="qemu2")
	I0930 04:16:44.042255    6394 client.go:168] LocalClient.Create starting
	I0930 04:16:44.042409    6394 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19734-1406/.minikube/certs/ca.pem
	I0930 04:16:44.042480    6394 main.go:141] libmachine: Decoding PEM data...
	I0930 04:16:44.042499    6394 main.go:141] libmachine: Parsing certificate...
	I0930 04:16:44.042573    6394 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19734-1406/.minikube/certs/cert.pem
	I0930 04:16:44.042617    6394 main.go:141] libmachine: Decoding PEM data...
	I0930 04:16:44.042637    6394 main.go:141] libmachine: Parsing certificate...
	I0930 04:16:44.043138    6394 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19734-1406/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0930 04:16:44.255147    6394 main.go:141] libmachine: Creating SSH key...
	I0930 04:16:44.482954    6394 main.go:141] libmachine: Creating Disk image...
	I0930 04:16:44.482965    6394 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0930 04:16:44.485174    6394 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/no-preload-616000/disk.qcow2.raw /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/no-preload-616000/disk.qcow2
	I0930 04:16:44.495339    6394 main.go:141] libmachine: STDOUT: 
	I0930 04:16:44.495357    6394 main.go:141] libmachine: STDERR: 
	I0930 04:16:44.495436    6394 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/no-preload-616000/disk.qcow2 +20000M
	I0930 04:16:44.504401    6394 main.go:141] libmachine: STDOUT: Image resized.
	
	I0930 04:16:44.504420    6394 main.go:141] libmachine: STDERR: 
	I0930 04:16:44.504436    6394 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/no-preload-616000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/no-preload-616000/disk.qcow2
	I0930 04:16:44.504441    6394 main.go:141] libmachine: Starting QEMU VM...
	I0930 04:16:44.504452    6394 qemu.go:418] Using hvf for hardware acceleration
	I0930 04:16:44.504499    6394 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/no-preload-616000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19734-1406/.minikube/machines/no-preload-616000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/no-preload-616000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b2:4d:21:36:ff:f1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/no-preload-616000/disk.qcow2
	I0930 04:16:44.506528    6394 main.go:141] libmachine: STDOUT: 
	I0930 04:16:44.506544    6394 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0930 04:16:44.506558    6394 client.go:171] duration metric: took 464.270834ms to LocalClient.Create
	I0930 04:16:46.507713    6394 start.go:128] duration metric: took 2.520951542s to createHost
	I0930 04:16:46.507775    6394 start.go:83] releasing machines lock for "no-preload-616000", held for 2.521256583s
	W0930 04:16:46.507944    6394 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-616000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-616000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0930 04:16:46.525407    6394 out.go:201] 
	W0930 04:16:46.529429    6394 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0930 04:16:46.529474    6394 out.go:270] * 
	* 
	W0930 04:16:46.532229    6394 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0930 04:16:46.546329    6394 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p no-preload-616000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-616000 -n no-preload-616000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-616000 -n no-preload-616000: exit status 7 (49.755084ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-616000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/FirstStart (10.22s)
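
Note: every failure in this batch has the same host-side cause: nothing is listening on /var/run/socket_vmnet, so socket_vmnet_client cannot hand QEMU a network file descriptor and the VM is never created. A minimal standalone probe (a sketch, assuming only the socket path shown in the log; a plain unix-socket dial is enough to reproduce the refusal, even though the real client also passes fds):

// probe.go: dial the socket_vmnet control socket to confirm the
// "Connection refused" seen throughout this report.
package main

import (
	"fmt"
	"net"
)

func main() {
	conn, err := net.Dial("unix", "/var/run/socket_vmnet")
	if err != nil {
		// Expected on this host:
		// dial unix /var/run/socket_vmnet: connect: connection refused
		fmt.Println("socket_vmnet not reachable:", err)
		return
	}
	defer conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}

On a Homebrew install the usual remedy is to start the daemon before the suite runs (typically "sudo brew services start socket_vmnet"), though the exact service setup depends on how socket_vmnet was installed.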

TestStartStop/group/old-k8s-version/serial/SecondStart (7.25s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-153000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-153000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (7.200618041s)

-- stdout --
	* [old-k8s-version-153000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19734
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19734-1406/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19734-1406/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	* Using the qemu2 driver based on existing profile
	* Starting "old-k8s-version-153000" primary control-plane node in "old-k8s-version-153000" cluster
	* Restarting existing qemu2 VM for "old-k8s-version-153000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "old-k8s-version-153000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0930 04:16:36.851574    6434 out.go:345] Setting OutFile to fd 1 ...
	I0930 04:16:36.851704    6434 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 04:16:36.851708    6434 out.go:358] Setting ErrFile to fd 2...
	I0930 04:16:36.851710    6434 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 04:16:36.851855    6434 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19734-1406/.minikube/bin
	I0930 04:16:36.853140    6434 out.go:352] Setting JSON to false
	I0930 04:16:36.871222    6434 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4559,"bootTime":1727690437,"procs":511,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0930 04:16:36.871297    6434 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0930 04:16:36.875378    6434 out.go:177] * [old-k8s-version-153000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0930 04:16:36.883540    6434 out.go:177]   - MINIKUBE_LOCATION=19734
	I0930 04:16:36.883620    6434 notify.go:220] Checking for updates...
	I0930 04:16:36.891543    6434 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19734-1406/kubeconfig
	I0930 04:16:36.899547    6434 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0930 04:16:36.902557    6434 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0930 04:16:36.910616    6434 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19734-1406/.minikube
	I0930 04:16:36.917563    6434 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0930 04:16:36.923773    6434 config.go:182] Loaded profile config "old-k8s-version-153000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0930 04:16:36.927460    6434 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I0930 04:16:36.930532    6434 driver.go:394] Setting default libvirt URI to qemu:///system
	I0930 04:16:36.934569    6434 out.go:177] * Using the qemu2 driver based on existing profile
	I0930 04:16:36.941533    6434 start.go:297] selected driver: qemu2
	I0930 04:16:36.941540    6434 start.go:901] validating driver "qemu2" against &{Name:old-k8s-version-153000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-153000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 04:16:36.941605    6434 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0930 04:16:36.943716    6434 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0930 04:16:36.943741    6434 cni.go:84] Creating CNI manager for ""
	I0930 04:16:36.943764    6434 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0930 04:16:36.943787    6434 start.go:340] cluster config:
	{Name:old-k8s-version-153000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-153000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 04:16:36.947135    6434 iso.go:125] acquiring lock: {Name:mk0aae8f7f133a44b195b0eefe4c7ae573d14aac Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0930 04:16:36.954568    6434 out.go:177] * Starting "old-k8s-version-153000" primary control-plane node in "old-k8s-version-153000" cluster
	I0930 04:16:36.958614    6434 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0930 04:16:36.958636    6434 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0930 04:16:36.958648    6434 cache.go:56] Caching tarball of preloaded images
	I0930 04:16:36.958733    6434 preload.go:172] Found /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0930 04:16:36.958738    6434 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0930 04:16:36.958793    6434 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19734-1406/.minikube/profiles/old-k8s-version-153000/config.json ...
	I0930 04:16:36.959288    6434 start.go:360] acquireMachinesLock for old-k8s-version-153000: {Name:mk6ecfc7d8126da9fa2597e515ffeee9de676ee7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0930 04:16:38.926412    6434 start.go:364] duration metric: took 1.967106334s to acquireMachinesLock for "old-k8s-version-153000"
	I0930 04:16:38.926502    6434 start.go:96] Skipping create...Using existing machine configuration
	I0930 04:16:38.926531    6434 fix.go:54] fixHost starting: 
	I0930 04:16:38.927143    6434 fix.go:112] recreateIfNeeded on old-k8s-version-153000: state=Stopped err=<nil>
	W0930 04:16:38.927185    6434 fix.go:138] unexpected machine state, will restart: <nil>
	I0930 04:16:38.933671    6434 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-153000" ...
	I0930 04:16:38.945570    6434 qemu.go:418] Using hvf for hardware acceleration
	I0930 04:16:38.945815    6434 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/old-k8s-version-153000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19734-1406/.minikube/machines/old-k8s-version-153000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/old-k8s-version-153000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f6:df:be:fb:d6:0e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/old-k8s-version-153000/disk.qcow2
	I0930 04:16:38.957744    6434 main.go:141] libmachine: STDOUT: 
	I0930 04:16:38.957820    6434 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0930 04:16:38.957948    6434 fix.go:56] duration metric: took 31.417042ms for fixHost
	I0930 04:16:38.957971    6434 start.go:83] releasing machines lock for "old-k8s-version-153000", held for 31.5045ms
	W0930 04:16:38.958009    6434 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0930 04:16:38.958174    6434 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0930 04:16:38.958191    6434 start.go:729] Will try again in 5 seconds ...
	I0930 04:16:43.958420    6434 start.go:360] acquireMachinesLock for old-k8s-version-153000: {Name:mk6ecfc7d8126da9fa2597e515ffeee9de676ee7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0930 04:16:43.958846    6434 start.go:364] duration metric: took 327.417µs to acquireMachinesLock for "old-k8s-version-153000"
	I0930 04:16:43.958923    6434 start.go:96] Skipping create...Using existing machine configuration
	I0930 04:16:43.958942    6434 fix.go:54] fixHost starting: 
	I0930 04:16:43.959693    6434 fix.go:112] recreateIfNeeded on old-k8s-version-153000: state=Stopped err=<nil>
	W0930 04:16:43.959719    6434 fix.go:138] unexpected machine state, will restart: <nil>
	I0930 04:16:43.967102    6434 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-153000" ...
	I0930 04:16:43.976037    6434 qemu.go:418] Using hvf for hardware acceleration
	I0930 04:16:43.976399    6434 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/old-k8s-version-153000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19734-1406/.minikube/machines/old-k8s-version-153000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/old-k8s-version-153000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f6:df:be:fb:d6:0e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/old-k8s-version-153000/disk.qcow2
	I0930 04:16:43.986321    6434 main.go:141] libmachine: STDOUT: 
	I0930 04:16:43.986380    6434 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0930 04:16:43.986466    6434 fix.go:56] duration metric: took 27.525042ms for fixHost
	I0930 04:16:43.986483    6434 start.go:83] releasing machines lock for "old-k8s-version-153000", held for 27.613917ms
	W0930 04:16:43.986666    6434 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-153000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-153000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0930 04:16:43.997988    6434 out.go:201] 
	W0930 04:16:44.003097    6434 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0930 04:16:44.003142    6434 out.go:270] * 
	* 
	W0930 04:16:44.005080    6434 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0930 04:16:44.012994    6434 out.go:201] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p old-k8s-version-153000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-153000 -n old-k8s-version-153000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-153000 -n old-k8s-version-153000: exit status 7 (49.67925ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-153000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (7.25s)
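
The stderr above also shows the shape of minikube's recovery path on this error: one failed fixHost/StartHost, a fixed five-second back-off ("Will try again in 5 seconds ..."), a single retry, and only then the GUEST_PROVISION exit. A schematic of that control flow (illustrative only, not minikube's actual code; the error text is copied from the log):

package main

import (
	"errors"
	"fmt"
	"time"
)

// startHost stands in for the driver start that fails in the log.
func startHost() error {
	return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
}

func main() {
	if err := startHost(); err != nil {
		fmt.Println("! StartHost failed, but will try again:", err)
		time.Sleep(5 * time.Second) // fixed back-off, as in the log
		if err = startHost(); err != nil {
			fmt.Println("X Exiting due to GUEST_PROVISION:", err)
		}
	}
}

Since the daemon is still down five seconds later, the retry fails identically and the test exits with status 80.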

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-153000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-153000 -n old-k8s-version-153000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-153000 -n old-k8s-version-153000: exit status 7 (34.22025ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-153000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)
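
The context "old-k8s-version-153000" does not exist error here is the standard kubeconfig-loading failure: the earlier start exited before the context was ever written, so every follow-on test that builds a client config fails immediately. A sketch reproducing the same error with client-go's clientcmd (kubeconfig path and context name taken from the log; this is an illustration, not the test's own code):

package main

import (
	"fmt"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	rules := clientcmd.NewDefaultClientConfigLoadingRules()
	rules.ExplicitPath = "/Users/jenkins/minikube-integration/19734-1406/kubeconfig"
	overrides := &clientcmd.ConfigOverrides{CurrentContext: "old-k8s-version-153000"}
	cfg := clientcmd.NewNonInteractiveDeferredLoadingClientConfig(rules, overrides)
	if _, err := cfg.ClientConfig(); err != nil {
		// Prints: context "old-k8s-version-153000" does not exist
		fmt.Println(err)
	}
}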

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-153000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-153000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-153000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (30.095583ms)

** stderr ** 
	error: context "old-k8s-version-153000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-153000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-153000 -n old-k8s-version-153000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-153000 -n old-k8s-version-153000: exit status 7 (33.817417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-153000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p old-k8s-version-153000 image list --format=json
start_stop_delete_test.go:304: v1.20.0 images missing (-want +got):
[]string{
- 	"k8s.gcr.io/coredns:1.7.0",
- 	"k8s.gcr.io/etcd:3.4.13-0",
- 	"k8s.gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/kube-apiserver:v1.20.0",
- 	"k8s.gcr.io/kube-controller-manager:v1.20.0",
- 	"k8s.gcr.io/kube-proxy:v1.20.0",
- 	"k8s.gcr.io/kube-scheduler:v1.20.0",
- 	"k8s.gcr.io/pause:3.2",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-153000 -n old-k8s-version-153000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-153000 -n old-k8s-version-153000: exit status 7 (31.72ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-153000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.10s)
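
The "(-want +got)" block above is the rendering produced by a github.com/google/go-cmp style diff (an assumption; the test's exact helper is not shown in this report). With the host stopped, "image list" returns nothing, so every expected k8s.gcr.io image lands on the minus side. A minimal reproduction under that assumption:

package main

import (
	"fmt"

	"github.com/google/go-cmp/cmp"
)

func main() {
	want := []string{
		"k8s.gcr.io/kube-apiserver:v1.20.0",
		"k8s.gcr.io/pause:3.2",
	}
	got := []string{} // empty: the host is Stopped, so no images are listed
	if diff := cmp.Diff(want, got); diff != "" {
		fmt.Printf("v1.20.0 images missing (-want +got):\n%s", diff)
	}
}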

TestStartStop/group/old-k8s-version/serial/Pause (0.12s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p old-k8s-version-153000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p old-k8s-version-153000 --alsologtostderr -v=1: exit status 83 (57.631916ms)

-- stdout --
	* The control-plane node old-k8s-version-153000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p old-k8s-version-153000"

-- /stdout --
** stderr ** 
	I0930 04:16:44.300631    6464 out.go:345] Setting OutFile to fd 1 ...
	I0930 04:16:44.300982    6464 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 04:16:44.300986    6464 out.go:358] Setting ErrFile to fd 2...
	I0930 04:16:44.300988    6464 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 04:16:44.301120    6464 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19734-1406/.minikube/bin
	I0930 04:16:44.301305    6464 out.go:352] Setting JSON to false
	I0930 04:16:44.301314    6464 mustload.go:65] Loading cluster: old-k8s-version-153000
	I0930 04:16:44.301523    6464 config.go:182] Loaded profile config "old-k8s-version-153000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0930 04:16:44.306980    6464 out.go:177] * The control-plane node old-k8s-version-153000 host is not running: state=Stopped
	I0930 04:16:44.323989    6464 out.go:177]   To start a cluster, run: "minikube start -p old-k8s-version-153000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p old-k8s-version-153000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-153000 -n old-k8s-version-153000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-153000 -n old-k8s-version-153000: exit status 7 (30.12075ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-153000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-153000 -n old-k8s-version-153000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-153000 -n old-k8s-version-153000: exit status 7 (29.36125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-153000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (0.12s)
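
A note on the recurring "exit status 7 (may be ok)" in these post-mortems: per "minikube status --help", the exit code encodes host, cluster, and Kubernetes health in its low bits (1 = host not OK, 2 = cluster not OK, 4 = Kubernetes not OK), so 7 means all three are down, consistent with a VM that never booted. Decoding it:

package main

import "fmt"

func main() {
	const status = 7 // exit code from "minikube status" in the post-mortem
	fmt.Println("host down:      ", status&1 != 0)
	fmt.Println("cluster down:   ", status&2 != 0)
	fmt.Println("kubernetes down:", status&4 != 0)
}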

TestStartStop/group/embed-certs/serial/FirstStart (11.79s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-846000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-846000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (11.720111958s)

-- stdout --
	* [embed-certs-846000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19734
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19734-1406/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19734-1406/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "embed-certs-846000" primary control-plane node in "embed-certs-846000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "embed-certs-846000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0930 04:16:44.646934    6484 out.go:345] Setting OutFile to fd 1 ...
	I0930 04:16:44.647096    6484 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 04:16:44.647099    6484 out.go:358] Setting ErrFile to fd 2...
	I0930 04:16:44.647101    6484 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 04:16:44.647233    6484 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19734-1406/.minikube/bin
	I0930 04:16:44.648318    6484 out.go:352] Setting JSON to false
	I0930 04:16:44.664374    6484 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4567,"bootTime":1727690437,"procs":509,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0930 04:16:44.664445    6484 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0930 04:16:44.669116    6484 out.go:177] * [embed-certs-846000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0930 04:16:44.678024    6484 notify.go:220] Checking for updates...
	I0930 04:16:44.682940    6484 out.go:177]   - MINIKUBE_LOCATION=19734
	I0930 04:16:44.690969    6484 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19734-1406/kubeconfig
	I0930 04:16:44.698887    6484 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0930 04:16:44.706858    6484 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0930 04:16:44.710928    6484 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19734-1406/.minikube
	I0930 04:16:44.717895    6484 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0930 04:16:44.722259    6484 config.go:182] Loaded profile config "multinode-711000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0930 04:16:44.722326    6484 config.go:182] Loaded profile config "no-preload-616000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0930 04:16:44.722375    6484 driver.go:394] Setting default libvirt URI to qemu:///system
	I0930 04:16:44.725947    6484 out.go:177] * Using the qemu2 driver based on user configuration
	I0930 04:16:44.732961    6484 start.go:297] selected driver: qemu2
	I0930 04:16:44.732967    6484 start.go:901] validating driver "qemu2" against <nil>
	I0930 04:16:44.732972    6484 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0930 04:16:44.735286    6484 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0930 04:16:44.738734    6484 out.go:177] * Automatically selected the socket_vmnet network
	I0930 04:16:44.743045    6484 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0930 04:16:44.743071    6484 cni.go:84] Creating CNI manager for ""
	I0930 04:16:44.743103    6484 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0930 04:16:44.743122    6484 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0930 04:16:44.743154    6484 start.go:340] cluster config:
	{Name:embed-certs-846000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-846000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 04:16:44.746977    6484 iso.go:125] acquiring lock: {Name:mk0aae8f7f133a44b195b0eefe4c7ae573d14aac Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0930 04:16:44.750978    6484 out.go:177] * Starting "embed-certs-846000" primary control-plane node in "embed-certs-846000" cluster
	I0930 04:16:44.758939    6484 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0930 04:16:44.758952    6484 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0930 04:16:44.758959    6484 cache.go:56] Caching tarball of preloaded images
	I0930 04:16:44.759014    6484 preload.go:172] Found /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0930 04:16:44.759020    6484 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0930 04:16:44.759076    6484 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19734-1406/.minikube/profiles/embed-certs-846000/config.json ...
	I0930 04:16:44.759086    6484 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19734-1406/.minikube/profiles/embed-certs-846000/config.json: {Name:mkacde7794838f0859c0e7b649f624ec3eccfc7b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 04:16:44.759315    6484 start.go:360] acquireMachinesLock for embed-certs-846000: {Name:mk6ecfc7d8126da9fa2597e515ffeee9de676ee7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0930 04:16:46.508003    6484 start.go:364] duration metric: took 1.748567666s to acquireMachinesLock for "embed-certs-846000"
	I0930 04:16:46.508124    6484 start.go:93] Provisioning new machine with config: &{Name:embed-certs-846000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-846000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0930 04:16:46.508348    6484 start.go:125] createHost starting for "" (driver="qemu2")
	I0930 04:16:46.521288    6484 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0930 04:16:46.574515    6484 start.go:159] libmachine.API.Create for "embed-certs-846000" (driver="qemu2")
	I0930 04:16:46.574569    6484 client.go:168] LocalClient.Create starting
	I0930 04:16:46.574737    6484 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19734-1406/.minikube/certs/ca.pem
	I0930 04:16:46.574800    6484 main.go:141] libmachine: Decoding PEM data...
	I0930 04:16:46.574819    6484 main.go:141] libmachine: Parsing certificate...
	I0930 04:16:46.574889    6484 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19734-1406/.minikube/certs/cert.pem
	I0930 04:16:46.574940    6484 main.go:141] libmachine: Decoding PEM data...
	I0930 04:16:46.574960    6484 main.go:141] libmachine: Parsing certificate...
	I0930 04:16:46.575565    6484 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19734-1406/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0930 04:16:46.795282    6484 main.go:141] libmachine: Creating SSH key...
	I0930 04:16:46.865558    6484 main.go:141] libmachine: Creating Disk image...
	I0930 04:16:46.865568    6484 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0930 04:16:46.865795    6484 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/embed-certs-846000/disk.qcow2.raw /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/embed-certs-846000/disk.qcow2
	I0930 04:16:46.876131    6484 main.go:141] libmachine: STDOUT: 
	I0930 04:16:46.876165    6484 main.go:141] libmachine: STDERR: 
	I0930 04:16:46.876236    6484 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/embed-certs-846000/disk.qcow2 +20000M
	I0930 04:16:46.892929    6484 main.go:141] libmachine: STDOUT: Image resized.
	
	I0930 04:16:46.892954    6484 main.go:141] libmachine: STDERR: 
	I0930 04:16:46.892971    6484 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/embed-certs-846000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/embed-certs-846000/disk.qcow2
	I0930 04:16:46.892975    6484 main.go:141] libmachine: Starting QEMU VM...
	I0930 04:16:46.892990    6484 qemu.go:418] Using hvf for hardware acceleration
	I0930 04:16:46.893018    6484 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/embed-certs-846000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19734-1406/.minikube/machines/embed-certs-846000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/embed-certs-846000/qemu.pid -device virtio-net-pci,netdev=net0,mac=56:c7:bb:e9:99:41 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/embed-certs-846000/disk.qcow2
	I0930 04:16:46.894862    6484 main.go:141] libmachine: STDOUT: 
	I0930 04:16:46.894877    6484 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0930 04:16:46.894903    6484 client.go:171] duration metric: took 320.332417ms to LocalClient.Create
	I0930 04:16:48.897066    6484 start.go:128] duration metric: took 2.38872175s to createHost
	I0930 04:16:48.897129    6484 start.go:83] releasing machines lock for "embed-certs-846000", held for 2.389121375s
	W0930 04:16:48.897187    6484 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0930 04:16:48.906475    6484 out.go:177] * Deleting "embed-certs-846000" in qemu2 ...
	W0930 04:16:48.955756    6484 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0930 04:16:48.955787    6484 start.go:729] Will try again in 5 seconds ...
	I0930 04:16:53.956175    6484 start.go:360] acquireMachinesLock for embed-certs-846000: {Name:mk6ecfc7d8126da9fa2597e515ffeee9de676ee7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0930 04:16:53.956647    6484 start.go:364] duration metric: took 391.708µs to acquireMachinesLock for "embed-certs-846000"
	I0930 04:16:53.956785    6484 start.go:93] Provisioning new machine with config: &{Name:embed-certs-846000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-846000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0930 04:16:53.957014    6484 start.go:125] createHost starting for "" (driver="qemu2")
	I0930 04:16:53.968639    6484 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0930 04:16:54.018111    6484 start.go:159] libmachine.API.Create for "embed-certs-846000" (driver="qemu2")
	I0930 04:16:54.018173    6484 client.go:168] LocalClient.Create starting
	I0930 04:16:54.018321    6484 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19734-1406/.minikube/certs/ca.pem
	I0930 04:16:54.018390    6484 main.go:141] libmachine: Decoding PEM data...
	I0930 04:16:54.018406    6484 main.go:141] libmachine: Parsing certificate...
	I0930 04:16:54.018462    6484 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19734-1406/.minikube/certs/cert.pem
	I0930 04:16:54.018507    6484 main.go:141] libmachine: Decoding PEM data...
	I0930 04:16:54.018525    6484 main.go:141] libmachine: Parsing certificate...
	I0930 04:16:54.019042    6484 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19734-1406/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0930 04:16:54.199805    6484 main.go:141] libmachine: Creating SSH key...
	I0930 04:16:54.251324    6484 main.go:141] libmachine: Creating Disk image...
	I0930 04:16:54.251329    6484 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0930 04:16:54.251522    6484 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/embed-certs-846000/disk.qcow2.raw /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/embed-certs-846000/disk.qcow2
	I0930 04:16:54.260947    6484 main.go:141] libmachine: STDOUT: 
	I0930 04:16:54.260965    6484 main.go:141] libmachine: STDERR: 
	I0930 04:16:54.261016    6484 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/embed-certs-846000/disk.qcow2 +20000M
	I0930 04:16:54.269081    6484 main.go:141] libmachine: STDOUT: Image resized.
	
	I0930 04:16:54.269103    6484 main.go:141] libmachine: STDERR: 
	I0930 04:16:54.269116    6484 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/embed-certs-846000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/embed-certs-846000/disk.qcow2
	I0930 04:16:54.269120    6484 main.go:141] libmachine: Starting QEMU VM...
	I0930 04:16:54.269128    6484 qemu.go:418] Using hvf for hardware acceleration
	I0930 04:16:54.269163    6484 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/embed-certs-846000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19734-1406/.minikube/machines/embed-certs-846000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/embed-certs-846000/qemu.pid -device virtio-net-pci,netdev=net0,mac=aa:8c:97:7c:d8:b3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/embed-certs-846000/disk.qcow2
	I0930 04:16:54.270886    6484 main.go:141] libmachine: STDOUT: 
	I0930 04:16:54.270900    6484 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0930 04:16:54.270910    6484 client.go:171] duration metric: took 252.736917ms to LocalClient.Create
	I0930 04:16:56.271444    6484 start.go:128] duration metric: took 2.314407625s to createHost
	I0930 04:16:56.271548    6484 start.go:83] releasing machines lock for "embed-certs-846000", held for 2.314913541s
	W0930 04:16:56.271864    6484 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-846000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0930 04:16:56.281530    6484 out.go:201] 
	W0930 04:16:56.310533    6484 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0930 04:16:56.310571    6484 out.go:270] * 
	W0930 04:16:56.313106    6484 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0930 04:16:56.322491    6484 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p embed-certs-846000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-846000 -n embed-certs-846000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-846000 -n embed-certs-846000: exit status 7 (63.124958ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-846000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/FirstStart (11.79s)
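Note: every qemu2 VM start in this report dies at the same step: socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet ("Connection refused"), so no VM ever boots and the test exits with status 80. A minimal way to probe this on the build host, using only the paths shown in the log above (the Homebrew service name below is an assumption; adjust to however socket_vmnet was actually installed):

    # Is anything accepting connections on the socket the qemu2 driver dials?
    ls -l /var/run/socket_vmnet
    nc -U /var/run/socket_vmnet </dev/null && echo "socket_vmnet reachable"
    # If socket_vmnet was installed via Homebrew (assumed), restart the daemon:
    sudo brew services restart socket_vmnet

If the probe still reports "Connection refused", every later start/stop test in this report will fail the same way.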

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (0.1s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-616000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context no-preload-616000 create -f testdata/busybox.yaml: exit status 1 (30.618125ms)

                                                
                                                
** stderr ** 
	error: context "no-preload-616000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context no-preload-616000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-616000 -n no-preload-616000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-616000 -n no-preload-616000: exit status 7 (33.414458ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-616000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-616000 -n no-preload-616000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-616000 -n no-preload-616000: exit status 7 (32.577583ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-616000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (0.10s)
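Note: the kubectl failures here are downstream of the failed FirstStart: minikube only writes a kubeconfig context for a profile once the cluster comes up, so 'context "no-preload-616000" does not exist' is expected after a start that never provisioned a VM. A quick way to confirm which contexts were actually registered (standard kubectl, nothing profile-specific assumed):

    kubectl config get-contexts       # list every context in the kubeconfig
    kubectl config current-context    # errors if no context is selected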

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.15s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p no-preload-616000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-616000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context no-preload-616000 describe deploy/metrics-server -n kube-system: exit status 1 (28.261875ms)

                                                
                                                
** stderr ** 
	error: context "no-preload-616000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context no-preload-616000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-616000 -n no-preload-616000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-616000 -n no-preload-616000: exit status 7 (31.301792ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-616000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.15s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (7.31s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-616000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-616000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (7.267500125s)

                                                
                                                
-- stdout --
	* [no-preload-616000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19734
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19734-1406/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19734-1406/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "no-preload-616000" primary control-plane node in "no-preload-616000" cluster
	* Restarting existing qemu2 VM for "no-preload-616000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "no-preload-616000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0930 04:16:49.132345    6522 out.go:345] Setting OutFile to fd 1 ...
	I0930 04:16:49.132477    6522 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 04:16:49.132480    6522 out.go:358] Setting ErrFile to fd 2...
	I0930 04:16:49.132482    6522 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 04:16:49.132632    6522 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19734-1406/.minikube/bin
	I0930 04:16:49.133676    6522 out.go:352] Setting JSON to false
	I0930 04:16:49.150062    6522 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4572,"bootTime":1727690437,"procs":508,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0930 04:16:49.150126    6522 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0930 04:16:49.155529    6522 out.go:177] * [no-preload-616000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0930 04:16:49.162561    6522 notify.go:220] Checking for updates...
	I0930 04:16:49.166480    6522 out.go:177]   - MINIKUBE_LOCATION=19734
	I0930 04:16:49.170482    6522 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19734-1406/kubeconfig
	I0930 04:16:49.174421    6522 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0930 04:16:49.177508    6522 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0930 04:16:49.181454    6522 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19734-1406/.minikube
	I0930 04:16:49.184452    6522 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0930 04:16:49.187825    6522 config.go:182] Loaded profile config "no-preload-616000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0930 04:16:49.188088    6522 driver.go:394] Setting default libvirt URI to qemu:///system
	I0930 04:16:49.191424    6522 out.go:177] * Using the qemu2 driver based on existing profile
	I0930 04:16:49.198419    6522 start.go:297] selected driver: qemu2
	I0930 04:16:49.198425    6522 start.go:901] validating driver "qemu2" against &{Name:no-preload-616000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-616000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 04:16:49.198473    6522 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0930 04:16:49.200699    6522 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0930 04:16:49.200724    6522 cni.go:84] Creating CNI manager for ""
	I0930 04:16:49.200756    6522 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0930 04:16:49.200779    6522 start.go:340] cluster config:
	{Name:no-preload-616000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-616000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 04:16:49.204282    6522 iso.go:125] acquiring lock: {Name:mk0aae8f7f133a44b195b0eefe4c7ae573d14aac Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0930 04:16:49.213490    6522 out.go:177] * Starting "no-preload-616000" primary control-plane node in "no-preload-616000" cluster
	I0930 04:16:49.217471    6522 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0930 04:16:49.217557    6522 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19734-1406/.minikube/profiles/no-preload-616000/config.json ...
	I0930 04:16:49.217595    6522 cache.go:107] acquiring lock: {Name:mk40bb24f276da084af3362fead279a169db3542 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0930 04:16:49.217603    6522 cache.go:107] acquiring lock: {Name:mk7d6e85ba87fd69272641b86402ec0a54f2a69d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0930 04:16:49.217649    6522 cache.go:107] acquiring lock: {Name:mk9ff08a5a9476f591bef8fce37c02edb6d066ea Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0930 04:16:49.217667    6522 cache.go:115] /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0930 04:16:49.217676    6522 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19734-1406/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 85.208µs
	I0930 04:16:49.217682    6522 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0930 04:16:49.217682    6522 cache.go:107] acquiring lock: {Name:mk3fc4be5d86e5e9e739030d2386c3ecd2420805 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0930 04:16:49.217689    6522 cache.go:107] acquiring lock: {Name:mk4d5ddcc2bdae7f940fe4d8dae725b1ee59ac37 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0930 04:16:49.217690    6522 cache.go:115] /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1 exists
	I0930 04:16:49.217714    6522 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.1" -> "/Users/jenkins/minikube-integration/19734-1406/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1" took 120.459µs
	I0930 04:16:49.217702    6522 cache.go:107] acquiring lock: {Name:mk50ba610827c04633e4a8bea026971417f61598 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0930 04:16:49.217763    6522 cache.go:115] /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 exists
	I0930 04:16:49.217760    6522 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0930 04:16:49.217767    6522 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/Users/jenkins/minikube-integration/19734-1406/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10" took 151.916µs
	I0930 04:16:49.217770    6522 cache.go:115] /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1 exists
	I0930 04:16:49.217771    6522 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 succeeded
	I0930 04:16:49.217774    6522 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.1" -> "/Users/jenkins/minikube-integration/19734-1406/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1" took 72.167µs
	I0930 04:16:49.217779    6522 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.1 -> /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1 succeeded
	I0930 04:16:49.217746    6522 cache.go:107] acquiring lock: {Name:mk7d310fe1c75cb1be4aa837520328a1ebcf6887 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0930 04:16:49.217845    6522 cache.go:107] acquiring lock: {Name:mk040147238bd52eda36ca62253896470a15f110 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0930 04:16:49.217861    6522 cache.go:115] /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1 exists
	I0930 04:16:49.217865    6522 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.1" -> "/Users/jenkins/minikube-integration/19734-1406/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1" took 149.708µs
	I0930 04:16:49.217869    6522 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.1 -> /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1 succeeded
	I0930 04:16:49.217802    6522 cache.go:115] /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1 exists
	I0930 04:16:49.217873    6522 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.1" -> "/Users/jenkins/minikube-integration/19734-1406/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1" took 191.666µs
	I0930 04:16:49.217875    6522 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.1 -> /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1 succeeded
	I0930 04:16:49.217718    6522 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.1 -> /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1 succeeded
	I0930 04:16:49.217893    6522 cache.go:115] /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 exists
	I0930 04:16:49.217900    6522 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.3" -> "/Users/jenkins/minikube-integration/19734-1406/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3" took 152.916µs
	I0930 04:16:49.217907    6522 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.3 -> /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 succeeded
	I0930 04:16:49.217963    6522 start.go:360] acquireMachinesLock for no-preload-616000: {Name:mk6ecfc7d8126da9fa2597e515ffeee9de676ee7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0930 04:16:49.217998    6522 start.go:364] duration metric: took 28.792µs to acquireMachinesLock for "no-preload-616000"
	I0930 04:16:49.218007    6522 start.go:96] Skipping create...Using existing machine configuration
	I0930 04:16:49.218011    6522 fix.go:54] fixHost starting: 
	I0930 04:16:49.218135    6522 fix.go:112] recreateIfNeeded on no-preload-616000: state=Stopped err=<nil>
	W0930 04:16:49.218143    6522 fix.go:138] unexpected machine state, will restart: <nil>
	I0930 04:16:49.226383    6522 out.go:177] * Restarting existing qemu2 VM for "no-preload-616000" ...
	I0930 04:16:49.230465    6522 qemu.go:418] Using hvf for hardware acceleration
	I0930 04:16:49.230503    6522 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/no-preload-616000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19734-1406/.minikube/machines/no-preload-616000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/no-preload-616000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b2:4d:21:36:ff:f1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/no-preload-616000/disk.qcow2
	I0930 04:16:49.231212    6522 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0930 04:16:49.232826    6522 main.go:141] libmachine: STDOUT: 
	I0930 04:16:49.232870    6522 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0930 04:16:49.232904    6522 fix.go:56] duration metric: took 14.890666ms for fixHost
	I0930 04:16:49.232909    6522 start.go:83] releasing machines lock for "no-preload-616000", held for 14.907208ms
	W0930 04:16:49.232918    6522 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0930 04:16:49.232958    6522 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0930 04:16:49.232962    6522 start.go:729] Will try again in 5 seconds ...
	I0930 04:16:51.074433    6522 cache.go:162] opening:  /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0
	I0930 04:16:54.232965    6522 start.go:360] acquireMachinesLock for no-preload-616000: {Name:mk6ecfc7d8126da9fa2597e515ffeee9de676ee7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0930 04:16:56.271705    6522 start.go:364] duration metric: took 2.03868275s to acquireMachinesLock for "no-preload-616000"
	I0930 04:16:56.271898    6522 start.go:96] Skipping create...Using existing machine configuration
	I0930 04:16:56.271916    6522 fix.go:54] fixHost starting: 
	I0930 04:16:56.272567    6522 fix.go:112] recreateIfNeeded on no-preload-616000: state=Stopped err=<nil>
	W0930 04:16:56.272594    6522 fix.go:138] unexpected machine state, will restart: <nil>
	I0930 04:16:56.306488    6522 out.go:177] * Restarting existing qemu2 VM for "no-preload-616000" ...
	I0930 04:16:56.314489    6522 qemu.go:418] Using hvf for hardware acceleration
	I0930 04:16:56.314653    6522 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/no-preload-616000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19734-1406/.minikube/machines/no-preload-616000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/no-preload-616000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b2:4d:21:36:ff:f1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/no-preload-616000/disk.qcow2
	I0930 04:16:56.325567    6522 main.go:141] libmachine: STDOUT: 
	I0930 04:16:56.325637    6522 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0930 04:16:56.325743    6522 fix.go:56] duration metric: took 53.827791ms for fixHost
	I0930 04:16:56.325772    6522 start.go:83] releasing machines lock for "no-preload-616000", held for 54.016542ms
	W0930 04:16:56.326060    6522 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-616000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0930 04:16:56.342787    6522 out.go:201] 
	W0930 04:16:56.346907    6522 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0930 04:16:56.346932    6522 out.go:270] * 
	W0930 04:16:56.348789    6522 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0930 04:16:56.360454    6522 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p no-preload-616000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-616000 -n no-preload-616000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-616000 -n no-preload-616000: exit status 7 (45.749ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-616000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (7.31s)
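Note: the --preload=false run still made progress before the VM start failed: the cache.go lines above show each v1.31.1 component image already present in the local image cache, and etcd:3.5.15-0 being fetched separately. The cached tarballs can be confirmed on disk with a listing of the cache directory named in the log (path copied verbatim from the log; purely illustrative):

    ls /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/images/arm64/registry.k8s.io/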

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (0.1s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-846000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context embed-certs-846000 create -f testdata/busybox.yaml: exit status 1 (30.557917ms)

                                                
                                                
** stderr ** 
	error: context "embed-certs-846000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context embed-certs-846000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-846000 -n embed-certs-846000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-846000 -n embed-certs-846000: exit status 7 (31.546958ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-846000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-846000 -n embed-certs-846000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-846000 -n embed-certs-846000: exit status 7 (35.149875ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-846000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/DeployApp (0.10s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-616000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-616000 -n no-preload-616000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-616000 -n no-preload-616000: exit status 7 (34.060708ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-616000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-616000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-616000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-616000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (28.908792ms)

                                                
                                                
** stderr ** 
	error: context "no-preload-616000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-616000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-616000 -n no-preload-616000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-616000 -n no-preload-616000: exit status 7 (32.006542ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-616000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.11s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p embed-certs-846000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-846000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context embed-certs-846000 describe deploy/metrics-server -n kube-system: exit status 1 (28.377625ms)

                                                
                                                
** stderr ** 
	error: context "embed-certs-846000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-846000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-846000 -n embed-certs-846000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-846000 -n embed-certs-846000: exit status 7 (31.751417ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-846000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.11s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.08s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p no-preload-616000 image list --format=json
start_stop_delete_test.go:304: v1.31.1 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.3",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.1",
- 	"registry.k8s.io/kube-controller-manager:v1.31.1",
- 	"registry.k8s.io/kube-proxy:v1.31.1",
- 	"registry.k8s.io/kube-scheduler:v1.31.1",
- 	"registry.k8s.io/pause:3.10",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-616000 -n no-preload-616000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-616000 -n no-preload-616000: exit status 7 (32.692333ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-616000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.08s)

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (0.1s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p no-preload-616000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p no-preload-616000 --alsologtostderr -v=1: exit status 83 (43.003791ms)

                                                
                                                
-- stdout --
	* The control-plane node no-preload-616000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p no-preload-616000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0930 04:16:56.626420    6563 out.go:345] Setting OutFile to fd 1 ...
	I0930 04:16:56.626601    6563 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 04:16:56.626605    6563 out.go:358] Setting ErrFile to fd 2...
	I0930 04:16:56.626607    6563 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 04:16:56.626730    6563 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19734-1406/.minikube/bin
	I0930 04:16:56.626987    6563 out.go:352] Setting JSON to false
	I0930 04:16:56.626996    6563 mustload.go:65] Loading cluster: no-preload-616000
	I0930 04:16:56.627210    6563 config.go:182] Loaded profile config "no-preload-616000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0930 04:16:56.631457    6563 out.go:177] * The control-plane node no-preload-616000 host is not running: state=Stopped
	I0930 04:16:56.635408    6563 out.go:177]   To start a cluster, run: "minikube start -p no-preload-616000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p no-preload-616000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-616000 -n no-preload-616000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-616000 -n no-preload-616000: exit status 7 (29.009125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-616000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-616000 -n no-preload-616000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-616000 -n no-preload-616000: exit status 7 (29.063125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-616000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/Pause (0.10s)
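Note on the exit codes in this group: "status" exits 7 against the stopped host, which helpers_test tolerates ("may be ok"), while "pause" exits 83 and prints start guidance instead of pausing. A minimal sketch of reading such codes with os/exec; the 7-means-stopped mapping is inferred only from the log above, not from a documented contract:

// exitcode.go - sketch of how a harness can branch on minikube exit
// codes; the meaning of code 7 here is an assumption taken from the
// "status error: exit status 7 (may be ok)" lines in this report.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-darwin-arm64", "status",
		"--format={{.Host}}", "-p", "no-preload-616000")
	out, err := cmd.Output()
	var ee *exec.ExitError
	if errors.As(err, &ee) && ee.ExitCode() == 7 {
		// Host exists but is not running; the harness treats this as
		// tolerable and merely skips log retrieval.
		fmt.Printf("host state: %q (stopped is tolerated here)\n", out)
		return
	}
	if err != nil {
		fmt.Println("status failed:", err)
	}
}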

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (10.12s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-497000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-497000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (10.065473375s)

-- stdout --
	* [default-k8s-diff-port-497000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19734
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19734-1406/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19734-1406/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "default-k8s-diff-port-497000" primary control-plane node in "default-k8s-diff-port-497000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "default-k8s-diff-port-497000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0930 04:16:57.059241    6594 out.go:345] Setting OutFile to fd 1 ...
	I0930 04:16:57.059372    6594 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 04:16:57.059376    6594 out.go:358] Setting ErrFile to fd 2...
	I0930 04:16:57.059379    6594 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 04:16:57.059516    6594 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19734-1406/.minikube/bin
	I0930 04:16:57.060618    6594 out.go:352] Setting JSON to false
	I0930 04:16:57.076636    6594 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4580,"bootTime":1727690437,"procs":508,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0930 04:16:57.076705    6594 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0930 04:16:57.080554    6594 out.go:177] * [default-k8s-diff-port-497000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0930 04:16:57.088405    6594 out.go:177]   - MINIKUBE_LOCATION=19734
	I0930 04:16:57.088441    6594 notify.go:220] Checking for updates...
	I0930 04:16:57.096351    6594 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19734-1406/kubeconfig
	I0930 04:16:57.099459    6594 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0930 04:16:57.110074    6594 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0930 04:16:57.113457    6594 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19734-1406/.minikube
	I0930 04:16:57.116462    6594 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0930 04:16:57.119862    6594 config.go:182] Loaded profile config "embed-certs-846000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0930 04:16:57.119928    6594 config.go:182] Loaded profile config "multinode-711000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0930 04:16:57.119986    6594 driver.go:394] Setting default libvirt URI to qemu:///system
	I0930 04:16:57.123352    6594 out.go:177] * Using the qemu2 driver based on user configuration
	I0930 04:16:57.130425    6594 start.go:297] selected driver: qemu2
	I0930 04:16:57.130430    6594 start.go:901] validating driver "qemu2" against <nil>
	I0930 04:16:57.130435    6594 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0930 04:16:57.132628    6594 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0930 04:16:57.134364    6594 out.go:177] * Automatically selected the socket_vmnet network
	I0930 04:16:57.137520    6594 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0930 04:16:57.137542    6594 cni.go:84] Creating CNI manager for ""
	I0930 04:16:57.137565    6594 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0930 04:16:57.137573    6594 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0930 04:16:57.137605    6594 start.go:340] cluster config:
	{Name:default-k8s-diff-port-497000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-497000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 04:16:57.141562    6594 iso.go:125] acquiring lock: {Name:mk0aae8f7f133a44b195b0eefe4c7ae573d14aac Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0930 04:16:57.149418    6594 out.go:177] * Starting "default-k8s-diff-port-497000" primary control-plane node in "default-k8s-diff-port-497000" cluster
	I0930 04:16:57.153403    6594 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0930 04:16:57.153419    6594 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0930 04:16:57.153428    6594 cache.go:56] Caching tarball of preloaded images
	I0930 04:16:57.153500    6594 preload.go:172] Found /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0930 04:16:57.153508    6594 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0930 04:16:57.153583    6594 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19734-1406/.minikube/profiles/default-k8s-diff-port-497000/config.json ...
	I0930 04:16:57.153594    6594 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19734-1406/.minikube/profiles/default-k8s-diff-port-497000/config.json: {Name:mkc6ea61837998c8e53348ad33f1b8842e0ceeed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 04:16:57.153835    6594 start.go:360] acquireMachinesLock for default-k8s-diff-port-497000: {Name:mk6ecfc7d8126da9fa2597e515ffeee9de676ee7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0930 04:16:57.153873    6594 start.go:364] duration metric: took 29.959µs to acquireMachinesLock for "default-k8s-diff-port-497000"
	I0930 04:16:57.153890    6594 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-497000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-497000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0930 04:16:57.153920    6594 start.go:125] createHost starting for "" (driver="qemu2")
	I0930 04:16:57.161410    6594 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0930 04:16:57.180650    6594 start.go:159] libmachine.API.Create for "default-k8s-diff-port-497000" (driver="qemu2")
	I0930 04:16:57.180686    6594 client.go:168] LocalClient.Create starting
	I0930 04:16:57.180754    6594 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19734-1406/.minikube/certs/ca.pem
	I0930 04:16:57.180791    6594 main.go:141] libmachine: Decoding PEM data...
	I0930 04:16:57.180809    6594 main.go:141] libmachine: Parsing certificate...
	I0930 04:16:57.180851    6594 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19734-1406/.minikube/certs/cert.pem
	I0930 04:16:57.180877    6594 main.go:141] libmachine: Decoding PEM data...
	I0930 04:16:57.180894    6594 main.go:141] libmachine: Parsing certificate...
	I0930 04:16:57.181264    6594 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19734-1406/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0930 04:16:57.342366    6594 main.go:141] libmachine: Creating SSH key...
	I0930 04:16:57.607761    6594 main.go:141] libmachine: Creating Disk image...
	I0930 04:16:57.607777    6594 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0930 04:16:57.608083    6594 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/default-k8s-diff-port-497000/disk.qcow2.raw /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/default-k8s-diff-port-497000/disk.qcow2
	I0930 04:16:57.617802    6594 main.go:141] libmachine: STDOUT: 
	I0930 04:16:57.617819    6594 main.go:141] libmachine: STDERR: 
	I0930 04:16:57.617894    6594 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/default-k8s-diff-port-497000/disk.qcow2 +20000M
	I0930 04:16:57.625784    6594 main.go:141] libmachine: STDOUT: Image resized.
	
	I0930 04:16:57.625802    6594 main.go:141] libmachine: STDERR: 
	I0930 04:16:57.625814    6594 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/default-k8s-diff-port-497000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/default-k8s-diff-port-497000/disk.qcow2
	I0930 04:16:57.625820    6594 main.go:141] libmachine: Starting QEMU VM...
	I0930 04:16:57.625836    6594 qemu.go:418] Using hvf for hardware acceleration
	I0930 04:16:57.625859    6594 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/default-k8s-diff-port-497000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19734-1406/.minikube/machines/default-k8s-diff-port-497000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/default-k8s-diff-port-497000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8e:64:ee:5e:62:42 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/default-k8s-diff-port-497000/disk.qcow2
	I0930 04:16:57.627476    6594 main.go:141] libmachine: STDOUT: 
	I0930 04:16:57.627491    6594 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0930 04:16:57.627509    6594 client.go:171] duration metric: took 446.823958ms to LocalClient.Create
	I0930 04:16:59.628542    6594 start.go:128] duration metric: took 2.474628s to createHost
	I0930 04:16:59.628635    6594 start.go:83] releasing machines lock for "default-k8s-diff-port-497000", held for 2.474794584s
	W0930 04:16:59.628691    6594 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0930 04:16:59.647052    6594 out.go:177] * Deleting "default-k8s-diff-port-497000" in qemu2 ...
	W0930 04:16:59.682633    6594 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0930 04:16:59.682651    6594 start.go:729] Will try again in 5 seconds ...
	I0930 04:17:04.684790    6594 start.go:360] acquireMachinesLock for default-k8s-diff-port-497000: {Name:mk6ecfc7d8126da9fa2597e515ffeee9de676ee7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0930 04:17:04.686977    6594 start.go:364] duration metric: took 2.114583ms to acquireMachinesLock for "default-k8s-diff-port-497000"
	I0930 04:17:04.687073    6594 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-497000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-497000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0930 04:17:04.687295    6594 start.go:125] createHost starting for "" (driver="qemu2")
	I0930 04:17:04.699642    6594 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0930 04:17:04.746194    6594 start.go:159] libmachine.API.Create for "default-k8s-diff-port-497000" (driver="qemu2")
	I0930 04:17:04.746239    6594 client.go:168] LocalClient.Create starting
	I0930 04:17:04.746349    6594 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19734-1406/.minikube/certs/ca.pem
	I0930 04:17:04.746406    6594 main.go:141] libmachine: Decoding PEM data...
	I0930 04:17:04.746425    6594 main.go:141] libmachine: Parsing certificate...
	I0930 04:17:04.746488    6594 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19734-1406/.minikube/certs/cert.pem
	I0930 04:17:04.746533    6594 main.go:141] libmachine: Decoding PEM data...
	I0930 04:17:04.746546    6594 main.go:141] libmachine: Parsing certificate...
	I0930 04:17:04.747070    6594 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19734-1406/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0930 04:17:04.917389    6594 main.go:141] libmachine: Creating SSH key...
	I0930 04:17:05.034568    6594 main.go:141] libmachine: Creating Disk image...
	I0930 04:17:05.034587    6594 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0930 04:17:05.034787    6594 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/default-k8s-diff-port-497000/disk.qcow2.raw /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/default-k8s-diff-port-497000/disk.qcow2
	I0930 04:17:05.044617    6594 main.go:141] libmachine: STDOUT: 
	I0930 04:17:05.044638    6594 main.go:141] libmachine: STDERR: 
	I0930 04:17:05.044721    6594 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/default-k8s-diff-port-497000/disk.qcow2 +20000M
	I0930 04:17:05.053698    6594 main.go:141] libmachine: STDOUT: Image resized.
	
	I0930 04:17:05.053727    6594 main.go:141] libmachine: STDERR: 
	I0930 04:17:05.053746    6594 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/default-k8s-diff-port-497000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/default-k8s-diff-port-497000/disk.qcow2
	I0930 04:17:05.053754    6594 main.go:141] libmachine: Starting QEMU VM...
	I0930 04:17:05.053764    6594 qemu.go:418] Using hvf for hardware acceleration
	I0930 04:17:05.053797    6594 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/default-k8s-diff-port-497000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19734-1406/.minikube/machines/default-k8s-diff-port-497000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/default-k8s-diff-port-497000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4e:f9:fb:61:47:97 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/default-k8s-diff-port-497000/disk.qcow2
	I0930 04:17:05.055800    6594 main.go:141] libmachine: STDOUT: 
	I0930 04:17:05.055815    6594 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0930 04:17:05.055828    6594 client.go:171] duration metric: took 309.589167ms to LocalClient.Create
	I0930 04:17:07.057132    6594 start.go:128] duration metric: took 2.369745209s to createHost
	I0930 04:17:07.057199    6594 start.go:83] releasing machines lock for "default-k8s-diff-port-497000", held for 2.370231875s
	W0930 04:17:07.057475    6594 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-497000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-497000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0930 04:17:07.070122    6594 out.go:201] 
	W0930 04:17:07.075149    6594 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0930 04:17:07.075178    6594 out.go:270] * 
	* 
	W0930 04:17:07.077681    6594 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0930 04:17:07.087069    6594 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-497000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-497000 -n default-k8s-diff-port-497000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-497000 -n default-k8s-diff-port-497000: exit status 7 (50.753333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-497000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (10.12s)
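Note on the root cause: every qemu2 start in this run dies at the same step, when libmachine hands the qemu-system-aarch64 command to /opt/socket_vmnet/bin/socket_vmnet_client and the client cannot reach the unix socket "/var/run/socket_vmnet", so the socket_vmnet daemon was evidently not listening on this host. A small diagnostic sketch (not part of the suite) that probes that socket directly:

// socketprobe.go - checks that something is accepting connections on
// the socket_vmnet unix socket, the precondition every qemu2 start in
// this report fails on. Diagnostic sketch only.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
	if err != nil {
		// Matches the libmachine STDERR above: connection refused means
		// no daemon is bound to the socket path.
		fmt.Println("socket_vmnet unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}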

TestStartStop/group/embed-certs/serial/SecondStart (5.89s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-846000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-846000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (5.8374645s)

-- stdout --
	* [embed-certs-846000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19734
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19734-1406/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19734-1406/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "embed-certs-846000" primary control-plane node in "embed-certs-846000" cluster
	* Restarting existing qemu2 VM for "embed-certs-846000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "embed-certs-846000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0930 04:16:58.916955    6614 out.go:345] Setting OutFile to fd 1 ...
	I0930 04:16:58.917063    6614 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 04:16:58.917066    6614 out.go:358] Setting ErrFile to fd 2...
	I0930 04:16:58.917068    6614 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 04:16:58.917204    6614 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19734-1406/.minikube/bin
	I0930 04:16:58.918249    6614 out.go:352] Setting JSON to false
	I0930 04:16:58.934300    6614 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4581,"bootTime":1727690437,"procs":508,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0930 04:16:58.934370    6614 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0930 04:16:58.939537    6614 out.go:177] * [embed-certs-846000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0930 04:16:58.947519    6614 out.go:177]   - MINIKUBE_LOCATION=19734
	I0930 04:16:58.947559    6614 notify.go:220] Checking for updates...
	I0930 04:16:58.955482    6614 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19734-1406/kubeconfig
	I0930 04:16:58.958471    6614 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0930 04:16:58.961493    6614 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0930 04:16:58.964567    6614 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19734-1406/.minikube
	I0930 04:16:58.967576    6614 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0930 04:16:58.970859    6614 config.go:182] Loaded profile config "embed-certs-846000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0930 04:16:58.971157    6614 driver.go:394] Setting default libvirt URI to qemu:///system
	I0930 04:16:58.975516    6614 out.go:177] * Using the qemu2 driver based on existing profile
	I0930 04:16:58.982536    6614 start.go:297] selected driver: qemu2
	I0930 04:16:58.982542    6614 start.go:901] validating driver "qemu2" against &{Name:embed-certs-846000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-846000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 04:16:58.982599    6614 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0930 04:16:58.984932    6614 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0930 04:16:58.984959    6614 cni.go:84] Creating CNI manager for ""
	I0930 04:16:58.984979    6614 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0930 04:16:58.985002    6614 start.go:340] cluster config:
	{Name:embed-certs-846000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-846000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 04:16:58.988650    6614 iso.go:125] acquiring lock: {Name:mk0aae8f7f133a44b195b0eefe4c7ae573d14aac Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0930 04:16:58.996489    6614 out.go:177] * Starting "embed-certs-846000" primary control-plane node in "embed-certs-846000" cluster
	I0930 04:16:59.000564    6614 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0930 04:16:59.000582    6614 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0930 04:16:59.000601    6614 cache.go:56] Caching tarball of preloaded images
	I0930 04:16:59.000665    6614 preload.go:172] Found /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0930 04:16:59.000672    6614 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0930 04:16:59.000747    6614 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19734-1406/.minikube/profiles/embed-certs-846000/config.json ...
	I0930 04:16:59.001328    6614 start.go:360] acquireMachinesLock for embed-certs-846000: {Name:mk6ecfc7d8126da9fa2597e515ffeee9de676ee7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0930 04:16:59.628759    6614 start.go:364] duration metric: took 627.42025ms to acquireMachinesLock for "embed-certs-846000"
	I0930 04:16:59.628826    6614 start.go:96] Skipping create...Using existing machine configuration
	I0930 04:16:59.628855    6614 fix.go:54] fixHost starting: 
	I0930 04:16:59.629453    6614 fix.go:112] recreateIfNeeded on embed-certs-846000: state=Stopped err=<nil>
	W0930 04:16:59.629497    6614 fix.go:138] unexpected machine state, will restart: <nil>
	I0930 04:16:59.639024    6614 out.go:177] * Restarting existing qemu2 VM for "embed-certs-846000" ...
	I0930 04:16:59.650038    6614 qemu.go:418] Using hvf for hardware acceleration
	I0930 04:16:59.650257    6614 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/embed-certs-846000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19734-1406/.minikube/machines/embed-certs-846000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/embed-certs-846000/qemu.pid -device virtio-net-pci,netdev=net0,mac=aa:8c:97:7c:d8:b3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/embed-certs-846000/disk.qcow2
	I0930 04:16:59.661165    6614 main.go:141] libmachine: STDOUT: 
	I0930 04:16:59.661237    6614 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0930 04:16:59.661353    6614 fix.go:56] duration metric: took 32.502167ms for fixHost
	I0930 04:16:59.661372    6614 start.go:83] releasing machines lock for "embed-certs-846000", held for 32.574041ms
	W0930 04:16:59.661398    6614 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0930 04:16:59.661561    6614 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0930 04:16:59.661577    6614 start.go:729] Will try again in 5 seconds ...
	I0930 04:17:04.663728    6614 start.go:360] acquireMachinesLock for embed-certs-846000: {Name:mk6ecfc7d8126da9fa2597e515ffeee9de676ee7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0930 04:17:04.664235    6614 start.go:364] duration metric: took 395.833µs to acquireMachinesLock for "embed-certs-846000"
	I0930 04:17:04.664362    6614 start.go:96] Skipping create...Using existing machine configuration
	I0930 04:17:04.664383    6614 fix.go:54] fixHost starting: 
	I0930 04:17:04.665196    6614 fix.go:112] recreateIfNeeded on embed-certs-846000: state=Stopped err=<nil>
	W0930 04:17:04.665228    6614 fix.go:138] unexpected machine state, will restart: <nil>
	I0930 04:17:04.671822    6614 out.go:177] * Restarting existing qemu2 VM for "embed-certs-846000" ...
	I0930 04:17:04.676667    6614 qemu.go:418] Using hvf for hardware acceleration
	I0930 04:17:04.676838    6614 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/embed-certs-846000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19734-1406/.minikube/machines/embed-certs-846000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/embed-certs-846000/qemu.pid -device virtio-net-pci,netdev=net0,mac=aa:8c:97:7c:d8:b3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/embed-certs-846000/disk.qcow2
	I0930 04:17:04.686708    6614 main.go:141] libmachine: STDOUT: 
	I0930 04:17:04.686756    6614 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0930 04:17:04.686823    6614 fix.go:56] duration metric: took 22.442333ms for fixHost
	I0930 04:17:04.686904    6614 start.go:83] releasing machines lock for "embed-certs-846000", held for 22.581708ms
	W0930 04:17:04.687098    6614 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-846000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-846000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0930 04:17:04.699642    6614 out.go:201] 
	W0930 04:17:04.703797    6614 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0930 04:17:04.703841    6614 out.go:270] * 
	* 
	W0930 04:17:04.705963    6614 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0930 04:17:04.716609    6614 out.go:201] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p embed-certs-846000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-846000 -n embed-certs-846000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-846000 -n embed-certs-846000: exit status 7 (50.781708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-846000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/SecondStart (5.89s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-846000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-846000 -n embed-certs-846000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-846000 -n embed-certs-846000: exit status 7 (33.818333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-846000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-846000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-846000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-846000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (29.69075ms)

** stderr ** 
	error: context "embed-certs-846000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-846000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-846000 -n embed-certs-846000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-846000 -n embed-certs-846000: exit status 7 (33.690166ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-846000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)
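Note on the kubectl failure: the context "embed-certs-846000" does not exist error is plain kubeconfig resolution; because the profile's start failed, no context was ever written for it. A sketch of the same lookup through client-go's clientcmd, which is assumed here to mirror what kubectl --context performs (simplified):

// ctxcheck.go - reproduces the context "embed-certs-846000" does not
// exist failure by resolving a kubeconfig context with client-go, the
// same resolution kubectl --context performs (assumed, simplified).
package main

import (
	"fmt"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	rules := clientcmd.NewDefaultClientConfigLoadingRules() // honors $KUBECONFIG
	overrides := &clientcmd.ConfigOverrides{CurrentContext: "embed-certs-846000"}
	cfg := clientcmd.NewNonInteractiveDeferredLoadingClientConfig(rules, overrides)
	if _, err := cfg.ClientConfig(); err != nil {
		// With the context missing, this errors just like the kubectl
		// call in the test did.
		fmt.Println("client config:", err)
	}
}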

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.08s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p embed-certs-846000 image list --format=json
start_stop_delete_test.go:304: v1.31.1 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.3",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.1",
- 	"registry.k8s.io/kube-controller-manager:v1.31.1",
- 	"registry.k8s.io/kube-proxy:v1.31.1",
- 	"registry.k8s.io/kube-scheduler:v1.31.1",
- 	"registry.k8s.io/pause:3.10",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-846000 -n embed-certs-846000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-846000 -n embed-certs-846000: exit status 7 (30.586625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-846000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.08s)

TestStartStop/group/embed-certs/serial/Pause (0.11s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p embed-certs-846000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p embed-certs-846000 --alsologtostderr -v=1: exit status 83 (45.939459ms)

-- stdout --
	* The control-plane node embed-certs-846000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p embed-certs-846000"

-- /stdout --
** stderr ** 
	I0930 04:17:04.990463    6635 out.go:345] Setting OutFile to fd 1 ...
	I0930 04:17:04.990661    6635 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 04:17:04.990668    6635 out.go:358] Setting ErrFile to fd 2...
	I0930 04:17:04.990671    6635 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 04:17:04.990829    6635 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19734-1406/.minikube/bin
	I0930 04:17:04.991066    6635 out.go:352] Setting JSON to false
	I0930 04:17:04.991074    6635 mustload.go:65] Loading cluster: embed-certs-846000
	I0930 04:17:04.991298    6635 config.go:182] Loaded profile config "embed-certs-846000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0930 04:17:04.994646    6635 out.go:177] * The control-plane node embed-certs-846000 host is not running: state=Stopped
	I0930 04:17:04.998675    6635 out.go:177]   To start a cluster, run: "minikube start -p embed-certs-846000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p embed-certs-846000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-846000 -n embed-certs-846000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-846000 -n embed-certs-846000: exit status 7 (30.663292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-846000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-846000 -n embed-certs-846000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-846000 -n embed-certs-846000: exit status 7 (29.524167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-846000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (0.11s)
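
Note on the exit code above: minikube distinguishes failure modes by exit status, and 83 here accompanies the "host is not running" guidance rather than a crash; the test then records the failure and moves on. A minimal sketch (not part of helpers_test.go) of reading such a code with os/exec, with the binary path and profile name copied from the log:

// Sketch: capturing minikube's exit status the way the harness reports it.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-darwin-arm64", "pause", "-p", "embed-certs-846000")
	out, err := cmd.CombinedOutput()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// 83 in the run above: the control-plane host is Stopped, so pause
		// refuses and prints the "minikube start -p ..." advice instead.
		fmt.Printf("exit status %d\n%s", exitErr.ExitCode(), out)
		return
	}
	fmt.Printf("paused:\n%s", out)
}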

TestStartStop/group/newest-cni/serial/FirstStart (11.71s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-576000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-576000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (11.651741083s)

-- stdout --
	* [newest-cni-576000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19734
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19734-1406/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19734-1406/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "newest-cni-576000" primary control-plane node in "newest-cni-576000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "newest-cni-576000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0930 04:17:05.313412    6655 out.go:345] Setting OutFile to fd 1 ...
	I0930 04:17:05.313552    6655 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 04:17:05.313555    6655 out.go:358] Setting ErrFile to fd 2...
	I0930 04:17:05.313558    6655 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 04:17:05.313709    6655 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19734-1406/.minikube/bin
	I0930 04:17:05.314758    6655 out.go:352] Setting JSON to false
	I0930 04:17:05.330884    6655 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4588,"bootTime":1727690437,"procs":509,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0930 04:17:05.330957    6655 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0930 04:17:05.334807    6655 out.go:177] * [newest-cni-576000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0930 04:17:05.341639    6655 out.go:177]   - MINIKUBE_LOCATION=19734
	I0930 04:17:05.341713    6655 notify.go:220] Checking for updates...
	I0930 04:17:05.350585    6655 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19734-1406/kubeconfig
	I0930 04:17:05.353648    6655 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0930 04:17:05.356706    6655 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0930 04:17:05.359678    6655 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19734-1406/.minikube
	I0930 04:17:05.362624    6655 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0930 04:17:05.365936    6655 config.go:182] Loaded profile config "default-k8s-diff-port-497000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0930 04:17:05.365996    6655 config.go:182] Loaded profile config "multinode-711000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0930 04:17:05.366046    6655 driver.go:394] Setting default libvirt URI to qemu:///system
	I0930 04:17:05.370624    6655 out.go:177] * Using the qemu2 driver based on user configuration
	I0930 04:17:05.377656    6655 start.go:297] selected driver: qemu2
	I0930 04:17:05.377662    6655 start.go:901] validating driver "qemu2" against <nil>
	I0930 04:17:05.377669    6655 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0930 04:17:05.379784    6655 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W0930 04:17:05.379824    6655 out.go:270] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0930 04:17:05.384580    6655 out.go:177] * Automatically selected the socket_vmnet network
	I0930 04:17:05.391780    6655 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0930 04:17:05.391804    6655 cni.go:84] Creating CNI manager for ""
	I0930 04:17:05.391829    6655 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0930 04:17:05.391840    6655 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0930 04:17:05.391864    6655 start.go:340] cluster config:
	{Name:newest-cni-576000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-576000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 04:17:05.395663    6655 iso.go:125] acquiring lock: {Name:mk0aae8f7f133a44b195b0eefe4c7ae573d14aac Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0930 04:17:05.403673    6655 out.go:177] * Starting "newest-cni-576000" primary control-plane node in "newest-cni-576000" cluster
	I0930 04:17:05.406661    6655 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0930 04:17:05.406679    6655 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0930 04:17:05.406689    6655 cache.go:56] Caching tarball of preloaded images
	I0930 04:17:05.406769    6655 preload.go:172] Found /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0930 04:17:05.406776    6655 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0930 04:17:05.406840    6655 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19734-1406/.minikube/profiles/newest-cni-576000/config.json ...
	I0930 04:17:05.406853    6655 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19734-1406/.minikube/profiles/newest-cni-576000/config.json: {Name:mk3440ceda6cb04c24f0efed5eb8351331bc95ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 04:17:05.407099    6655 start.go:360] acquireMachinesLock for newest-cni-576000: {Name:mk6ecfc7d8126da9fa2597e515ffeee9de676ee7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0930 04:17:07.057372    6655 start.go:364] duration metric: took 1.650207208s to acquireMachinesLock for "newest-cni-576000"
	I0930 04:17:07.057524    6655 start.go:93] Provisioning new machine with config: &{Name:newest-cni-576000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-576000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0930 04:17:07.057813    6655 start.go:125] createHost starting for "" (driver="qemu2")
	I0930 04:17:07.067158    6655 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0930 04:17:07.117689    6655 start.go:159] libmachine.API.Create for "newest-cni-576000" (driver="qemu2")
	I0930 04:17:07.117743    6655 client.go:168] LocalClient.Create starting
	I0930 04:17:07.117849    6655 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19734-1406/.minikube/certs/ca.pem
	I0930 04:17:07.117908    6655 main.go:141] libmachine: Decoding PEM data...
	I0930 04:17:07.117926    6655 main.go:141] libmachine: Parsing certificate...
	I0930 04:17:07.118004    6655 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19734-1406/.minikube/certs/cert.pem
	I0930 04:17:07.118075    6655 main.go:141] libmachine: Decoding PEM data...
	I0930 04:17:07.118090    6655 main.go:141] libmachine: Parsing certificate...
	I0930 04:17:07.118679    6655 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19734-1406/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0930 04:17:07.292669    6655 main.go:141] libmachine: Creating SSH key...
	I0930 04:17:07.394135    6655 main.go:141] libmachine: Creating Disk image...
	I0930 04:17:07.394142    6655 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0930 04:17:07.394318    6655 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/newest-cni-576000/disk.qcow2.raw /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/newest-cni-576000/disk.qcow2
	I0930 04:17:07.403992    6655 main.go:141] libmachine: STDOUT: 
	I0930 04:17:07.404014    6655 main.go:141] libmachine: STDERR: 
	I0930 04:17:07.404093    6655 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/newest-cni-576000/disk.qcow2 +20000M
	I0930 04:17:07.413621    6655 main.go:141] libmachine: STDOUT: Image resized.
	
	I0930 04:17:07.413647    6655 main.go:141] libmachine: STDERR: 
	I0930 04:17:07.413669    6655 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/newest-cni-576000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/newest-cni-576000/disk.qcow2
	I0930 04:17:07.413674    6655 main.go:141] libmachine: Starting QEMU VM...
	I0930 04:17:07.413686    6655 qemu.go:418] Using hvf for hardware acceleration
	I0930 04:17:07.413717    6655 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/newest-cni-576000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19734-1406/.minikube/machines/newest-cni-576000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/newest-cni-576000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a6:ae:38:35:ea:f6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/newest-cni-576000/disk.qcow2
	I0930 04:17:07.415718    6655 main.go:141] libmachine: STDOUT: 
	I0930 04:17:07.415733    6655 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0930 04:17:07.415753    6655 client.go:171] duration metric: took 298.008459ms to LocalClient.Create
	I0930 04:17:09.417927    6655 start.go:128] duration metric: took 2.36012225s to createHost
	I0930 04:17:09.417995    6655 start.go:83] releasing machines lock for "newest-cni-576000", held for 2.360614s
	W0930 04:17:09.418050    6655 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0930 04:17:09.428373    6655 out.go:177] * Deleting "newest-cni-576000" in qemu2 ...
	W0930 04:17:09.463799    6655 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0930 04:17:09.463833    6655 start.go:729] Will try again in 5 seconds ...
	I0930 04:17:14.464100    6655 start.go:360] acquireMachinesLock for newest-cni-576000: {Name:mk6ecfc7d8126da9fa2597e515ffeee9de676ee7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0930 04:17:14.464489    6655 start.go:364] duration metric: took 308.041µs to acquireMachinesLock for "newest-cni-576000"
	I0930 04:17:14.464612    6655 start.go:93] Provisioning new machine with config: &{Name:newest-cni-576000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-576000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0930 04:17:14.464889    6655 start.go:125] createHost starting for "" (driver="qemu2")
	I0930 04:17:14.475521    6655 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0930 04:17:14.525527    6655 start.go:159] libmachine.API.Create for "newest-cni-576000" (driver="qemu2")
	I0930 04:17:14.525586    6655 client.go:168] LocalClient.Create starting
	I0930 04:17:14.525695    6655 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19734-1406/.minikube/certs/ca.pem
	I0930 04:17:14.525758    6655 main.go:141] libmachine: Decoding PEM data...
	I0930 04:17:14.525776    6655 main.go:141] libmachine: Parsing certificate...
	I0930 04:17:14.525834    6655 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19734-1406/.minikube/certs/cert.pem
	I0930 04:17:14.525879    6655 main.go:141] libmachine: Decoding PEM data...
	I0930 04:17:14.525896    6655 main.go:141] libmachine: Parsing certificate...
	I0930 04:17:14.526490    6655 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19734-1406/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0930 04:17:14.698187    6655 main.go:141] libmachine: Creating SSH key...
	I0930 04:17:14.855288    6655 main.go:141] libmachine: Creating Disk image...
	I0930 04:17:14.855295    6655 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0930 04:17:14.855552    6655 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/newest-cni-576000/disk.qcow2.raw /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/newest-cni-576000/disk.qcow2
	I0930 04:17:14.865495    6655 main.go:141] libmachine: STDOUT: 
	I0930 04:17:14.865519    6655 main.go:141] libmachine: STDERR: 
	I0930 04:17:14.865594    6655 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/newest-cni-576000/disk.qcow2 +20000M
	I0930 04:17:14.873555    6655 main.go:141] libmachine: STDOUT: Image resized.
	
	I0930 04:17:14.873570    6655 main.go:141] libmachine: STDERR: 
	I0930 04:17:14.873580    6655 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/newest-cni-576000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/newest-cni-576000/disk.qcow2
	I0930 04:17:14.873586    6655 main.go:141] libmachine: Starting QEMU VM...
	I0930 04:17:14.873594    6655 qemu.go:418] Using hvf for hardware acceleration
	I0930 04:17:14.873628    6655 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/newest-cni-576000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19734-1406/.minikube/machines/newest-cni-576000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/newest-cni-576000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8e:d3:40:f6:3d:67 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/newest-cni-576000/disk.qcow2
	I0930 04:17:14.875254    6655 main.go:141] libmachine: STDOUT: 
	I0930 04:17:14.875268    6655 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0930 04:17:14.875288    6655 client.go:171] duration metric: took 349.6955ms to LocalClient.Create
	I0930 04:17:16.876526    6655 start.go:128] duration metric: took 2.41162925s to createHost
	I0930 04:17:16.876613    6655 start.go:83] releasing machines lock for "newest-cni-576000", held for 2.412144333s
	W0930 04:17:16.876920    6655 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-576000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-576000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0930 04:17:16.894549    6655 out.go:201] 
	W0930 04:17:16.907614    6655 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0930 04:17:16.907645    6655 out.go:270] * 
	* 
	W0930 04:17:16.910104    6655 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0930 04:17:16.921679    6655 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p newest-cni-576000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-576000 -n newest-cni-576000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-576000 -n newest-cni-576000: exit status 7 (58.068375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-576000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/FirstStart (11.71s)
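
Note on the root cause above: both VM creations fail before the guest ever boots because nothing is accepting connections on /var/run/socket_vmnet, so this is a problem with the host's socket_vmnet daemon rather than with the cluster or profile. A minimal sketch that reproduces the same failure mode by dialing the unix socket the way socket_vmnet_client does (path copied from the log):

// Sketch: checking whether the socket_vmnet daemon is reachable.
package main

import (
	"fmt"
	"net"
)

func main() {
	conn, err := net.Dial("unix", "/var/run/socket_vmnet")
	if err != nil {
		// With the daemon down this yields "connect: connection refused",
		// matching the STDERR captured by libmachine above.
		fmt.Println("socket_vmnet unreachable:", err)
		return
	}
	defer conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}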

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-497000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-497000 create -f testdata/busybox.yaml: exit status 1 (30.537125ms)

** stderr ** 
	error: context "default-k8s-diff-port-497000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context default-k8s-diff-port-497000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-497000 -n default-k8s-diff-port-497000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-497000 -n default-k8s-diff-port-497000: exit status 7 (34.702417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-497000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-497000 -n default-k8s-diff-port-497000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-497000 -n default-k8s-diff-port-497000: exit status 7 (34.328708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-497000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.10s)
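
Note on the error above: kubectl resolves --context against the kubeconfig, and because "minikube start" for this profile never succeeded, the "default-k8s-diff-port-497000" context was never written; every later kubectl call in this group fails for the same reason. A minimal sketch of that precondition check (assumes kubectl's "config get-contexts -o name" output form):

// Sketch: verifying a kubeconfig context exists before using --context.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("kubectl", "config", "get-contexts", "-o", "name").Output()
	if err != nil {
		fmt.Println("kubectl failed:", err)
		return
	}
	for _, name := range strings.Fields(string(out)) {
		if name == "default-k8s-diff-port-497000" {
			fmt.Println("context exists")
			return
		}
	}
	fmt.Println(`context "default-k8s-diff-port-497000" does not exist`)
}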

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.14s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p default-k8s-diff-port-497000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-497000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-497000 describe deploy/metrics-server -n kube-system: exit status 1 (27.932459ms)

** stderr ** 
	error: context "default-k8s-diff-port-497000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-497000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-497000 -n default-k8s-diff-port-497000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-497000 -n default-k8s-diff-port-497000: exit status 7 (30.884916ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-497000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.14s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (6.27s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-497000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-497000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (6.227004541s)

-- stdout --
	* [default-k8s-diff-port-497000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19734
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19734-1406/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19734-1406/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "default-k8s-diff-port-497000" primary control-plane node in "default-k8s-diff-port-497000" cluster
	* Restarting existing qemu2 VM for "default-k8s-diff-port-497000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "default-k8s-diff-port-497000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0930 04:17:10.763279    6699 out.go:345] Setting OutFile to fd 1 ...
	I0930 04:17:10.763392    6699 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 04:17:10.763396    6699 out.go:358] Setting ErrFile to fd 2...
	I0930 04:17:10.763398    6699 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 04:17:10.763513    6699 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19734-1406/.minikube/bin
	I0930 04:17:10.764528    6699 out.go:352] Setting JSON to false
	I0930 04:17:10.780638    6699 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4593,"bootTime":1727690437,"procs":509,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0930 04:17:10.780715    6699 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0930 04:17:10.785419    6699 out.go:177] * [default-k8s-diff-port-497000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0930 04:17:10.792594    6699 out.go:177]   - MINIKUBE_LOCATION=19734
	I0930 04:17:10.792654    6699 notify.go:220] Checking for updates...
	I0930 04:17:10.799620    6699 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19734-1406/kubeconfig
	I0930 04:17:10.802613    6699 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0930 04:17:10.805576    6699 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0930 04:17:10.808586    6699 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19734-1406/.minikube
	I0930 04:17:10.811614    6699 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0930 04:17:10.813477    6699 config.go:182] Loaded profile config "default-k8s-diff-port-497000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0930 04:17:10.813738    6699 driver.go:394] Setting default libvirt URI to qemu:///system
	I0930 04:17:10.818583    6699 out.go:177] * Using the qemu2 driver based on existing profile
	I0930 04:17:10.824478    6699 start.go:297] selected driver: qemu2
	I0930 04:17:10.824485    6699 start.go:901] validating driver "qemu2" against &{Name:default-k8s-diff-port-497000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-497000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 04:17:10.824545    6699 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0930 04:17:10.826903    6699 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0930 04:17:10.826928    6699 cni.go:84] Creating CNI manager for ""
	I0930 04:17:10.826961    6699 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0930 04:17:10.826993    6699 start.go:340] cluster config:
	{Name:default-k8s-diff-port-497000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-497000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 04:17:10.830626    6699 iso.go:125] acquiring lock: {Name:mk0aae8f7f133a44b195b0eefe4c7ae573d14aac Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0930 04:17:10.838624    6699 out.go:177] * Starting "default-k8s-diff-port-497000" primary control-plane node in "default-k8s-diff-port-497000" cluster
	I0930 04:17:10.843565    6699 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0930 04:17:10.843583    6699 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0930 04:17:10.843595    6699 cache.go:56] Caching tarball of preloaded images
	I0930 04:17:10.843676    6699 preload.go:172] Found /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0930 04:17:10.843682    6699 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0930 04:17:10.843749    6699 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19734-1406/.minikube/profiles/default-k8s-diff-port-497000/config.json ...
	I0930 04:17:10.844324    6699 start.go:360] acquireMachinesLock for default-k8s-diff-port-497000: {Name:mk6ecfc7d8126da9fa2597e515ffeee9de676ee7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0930 04:17:10.844361    6699 start.go:364] duration metric: took 30.583µs to acquireMachinesLock for "default-k8s-diff-port-497000"
	I0930 04:17:10.844370    6699 start.go:96] Skipping create...Using existing machine configuration
	I0930 04:17:10.844376    6699 fix.go:54] fixHost starting: 
	I0930 04:17:10.844511    6699 fix.go:112] recreateIfNeeded on default-k8s-diff-port-497000: state=Stopped err=<nil>
	W0930 04:17:10.844520    6699 fix.go:138] unexpected machine state, will restart: <nil>
	I0930 04:17:10.848417    6699 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-497000" ...
	I0930 04:17:10.856568    6699 qemu.go:418] Using hvf for hardware acceleration
	I0930 04:17:10.856603    6699 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/default-k8s-diff-port-497000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19734-1406/.minikube/machines/default-k8s-diff-port-497000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/default-k8s-diff-port-497000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4e:f9:fb:61:47:97 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/default-k8s-diff-port-497000/disk.qcow2
	I0930 04:17:10.858795    6699 main.go:141] libmachine: STDOUT: 
	I0930 04:17:10.858811    6699 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0930 04:17:10.858844    6699 fix.go:56] duration metric: took 14.467875ms for fixHost
	I0930 04:17:10.858850    6699 start.go:83] releasing machines lock for "default-k8s-diff-port-497000", held for 14.484167ms
	W0930 04:17:10.858857    6699 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0930 04:17:10.858894    6699 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0930 04:17:10.858899    6699 start.go:729] Will try again in 5 seconds ...
	I0930 04:17:15.861038    6699 start.go:360] acquireMachinesLock for default-k8s-diff-port-497000: {Name:mk6ecfc7d8126da9fa2597e515ffeee9de676ee7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0930 04:17:16.876790    6699 start.go:364] duration metric: took 1.015643708s to acquireMachinesLock for "default-k8s-diff-port-497000"
	I0930 04:17:16.876937    6699 start.go:96] Skipping create...Using existing machine configuration
	I0930 04:17:16.876959    6699 fix.go:54] fixHost starting: 
	I0930 04:17:16.877768    6699 fix.go:112] recreateIfNeeded on default-k8s-diff-port-497000: state=Stopped err=<nil>
	W0930 04:17:16.877795    6699 fix.go:138] unexpected machine state, will restart: <nil>
	I0930 04:17:16.903500    6699 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-497000" ...
	I0930 04:17:16.910532    6699 qemu.go:418] Using hvf for hardware acceleration
	I0930 04:17:16.910752    6699 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/default-k8s-diff-port-497000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19734-1406/.minikube/machines/default-k8s-diff-port-497000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/default-k8s-diff-port-497000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4e:f9:fb:61:47:97 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/default-k8s-diff-port-497000/disk.qcow2
	I0930 04:17:16.922064    6699 main.go:141] libmachine: STDOUT: 
	I0930 04:17:16.922121    6699 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0930 04:17:16.922206    6699 fix.go:56] duration metric: took 45.250083ms for fixHost
	I0930 04:17:16.922232    6699 start.go:83] releasing machines lock for "default-k8s-diff-port-497000", held for 45.402833ms
	W0930 04:17:16.922445    6699 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-497000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-497000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0930 04:17:16.937548    6699 out.go:201] 
	W0930 04:17:16.941653    6699 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0930 04:17:16.941700    6699 out.go:270] * 
	* 
	W0930 04:17:16.943316    6699 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0930 04:17:16.953488    6699 out.go:201] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-497000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-497000 -n default-k8s-diff-port-497000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-497000 -n default-k8s-diff-port-497000: exit status 7 (43.711125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-497000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (6.27s)
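
Note on the sequence above: the start path makes exactly one retry after a fixed delay ("Will try again in 5 seconds ...") and then exits with GUEST_PROVISION. The logged advice to run "minikube delete -p default-k8s-diff-port-497000" likely won't help here, since the refused socket is host-side state, not profile state. A minimal sketch of that retry shape (assumed, not minikube's actual code):

// Sketch: the single delayed retry visible in the log.
package main

import (
	"errors"
	"fmt"
	"time"
)

// startHost stands in for the qemu2 driver start that fails while socket_vmnet is down.
func startHost() error {
	return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
}

func main() {
	err := startHost()
	if err != nil {
		fmt.Println("! StartHost failed, but will try again:", err)
		time.Sleep(5 * time.Second)
		err = startHost()
	}
	if err != nil {
		fmt.Println("X Exiting due to GUEST_PROVISION:", err)
	}
}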

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.04s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-497000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-497000 -n default-k8s-diff-port-497000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-497000 -n default-k8s-diff-port-497000: exit status 7 (35.57525ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-497000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.04s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-497000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-497000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-497000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (27.651917ms)

** stderr ** 
	error: context "default-k8s-diff-port-497000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-497000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-497000 -n default-k8s-diff-port-497000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-497000 -n default-k8s-diff-port-497000: exit status 7 (31.120291ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-497000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p default-k8s-diff-port-497000 image list --format=json
start_stop_delete_test.go:304: v1.31.1 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.3",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.1",
- 	"registry.k8s.io/kube-controller-manager:v1.31.1",
- 	"registry.k8s.io/kube-proxy:v1.31.1",
- 	"registry.k8s.io/kube-scheduler:v1.31.1",
- 	"registry.k8s.io/pause:3.10",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-497000 -n default-k8s-diff-port-497000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-497000 -n default-k8s-diff-port-497000: exit status 7 (30.3465ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-497000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)
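
For context on the (-want +got) diff above: the test lists the images loaded into the profile and compares them against the expected v1.31.1 set; because the VM never started, the "got" side is empty and every expected image is reported missing. The same query can be run by hand (command taken verbatim from the log):

    out/minikube-darwin-arm64 -p default-k8s-diff-port-497000 image list --format=json
    # On a healthy v1.31.1 cluster this JSON would list every image shown under "-want",
    # e.g. registry.k8s.io/kube-apiserver:v1.31.1; here there is nothing to match.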

TestStartStop/group/default-k8s-diff-port/serial/Pause (0.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p default-k8s-diff-port-497000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-497000 --alsologtostderr -v=1: exit status 83 (41.163375ms)

-- stdout --
	* The control-plane node default-k8s-diff-port-497000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p default-k8s-diff-port-497000"

-- /stdout --
** stderr ** 
	I0930 04:17:17.199408    6732 out.go:345] Setting OutFile to fd 1 ...
	I0930 04:17:17.199573    6732 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 04:17:17.199576    6732 out.go:358] Setting ErrFile to fd 2...
	I0930 04:17:17.199578    6732 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 04:17:17.199714    6732 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19734-1406/.minikube/bin
	I0930 04:17:17.199954    6732 out.go:352] Setting JSON to false
	I0930 04:17:17.199965    6732 mustload.go:65] Loading cluster: default-k8s-diff-port-497000
	I0930 04:17:17.200191    6732 config.go:182] Loaded profile config "default-k8s-diff-port-497000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0930 04:17:17.203628    6732 out.go:177] * The control-plane node default-k8s-diff-port-497000 host is not running: state=Stopped
	I0930 04:17:17.207574    6732 out.go:177]   To start a cluster, run: "minikube start -p default-k8s-diff-port-497000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-497000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-497000 -n default-k8s-diff-port-497000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-497000 -n default-k8s-diff-port-497000: exit status 7 (30.044875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-497000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-497000 -n default-k8s-diff-port-497000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-497000 -n default-k8s-diff-port-497000: exit status 7 (29.08675ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-497000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (0.10s)
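
The pause failure is another downstream symptom: exit status 83 arrives together with the "host is not running: state=Stopped" message, and the command itself prints the recovery path. A sketch of that path (hedged: it assumes the VM can actually boot, which the socket_vmnet errors further down suggest it cannot on this host):

    out/minikube-darwin-arm64 start -p default-k8s-diff-port-497000    # recovery suggested by the output above
    out/minikube-darwin-arm64 pause -p default-k8s-diff-port-497000    # should then succeed with exit 0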

TestStartStop/group/newest-cni/serial/SecondStart (5.26s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-576000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-576000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (5.186766917s)

-- stdout --
	* [newest-cni-576000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19734
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19734-1406/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19734-1406/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "newest-cni-576000" primary control-plane node in "newest-cni-576000" cluster
	* Restarting existing qemu2 VM for "newest-cni-576000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "newest-cni-576000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0930 04:17:20.523614    6767 out.go:345] Setting OutFile to fd 1 ...
	I0930 04:17:20.523725    6767 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 04:17:20.523728    6767 out.go:358] Setting ErrFile to fd 2...
	I0930 04:17:20.523731    6767 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 04:17:20.523845    6767 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19734-1406/.minikube/bin
	I0930 04:17:20.524850    6767 out.go:352] Setting JSON to false
	I0930 04:17:20.540966    6767 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4603,"bootTime":1727690437,"procs":508,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0930 04:17:20.541030    6767 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0930 04:17:20.546445    6767 out.go:177] * [newest-cni-576000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0930 04:17:20.553459    6767 out.go:177]   - MINIKUBE_LOCATION=19734
	I0930 04:17:20.553527    6767 notify.go:220] Checking for updates...
	I0930 04:17:20.561435    6767 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19734-1406/kubeconfig
	I0930 04:17:20.564408    6767 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0930 04:17:20.567443    6767 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0930 04:17:20.570403    6767 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19734-1406/.minikube
	I0930 04:17:20.573385    6767 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0930 04:17:20.576647    6767 config.go:182] Loaded profile config "newest-cni-576000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0930 04:17:20.576920    6767 driver.go:394] Setting default libvirt URI to qemu:///system
	I0930 04:17:20.580386    6767 out.go:177] * Using the qemu2 driver based on existing profile
	I0930 04:17:20.587370    6767 start.go:297] selected driver: qemu2
	I0930 04:17:20.587375    6767 start.go:901] validating driver "qemu2" against &{Name:newest-cni-576000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-576000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 04:17:20.587435    6767 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0930 04:17:20.589748    6767 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0930 04:17:20.589780    6767 cni.go:84] Creating CNI manager for ""
	I0930 04:17:20.589801    6767 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0930 04:17:20.589822    6767 start.go:340] cluster config:
	{Name:newest-cni-576000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-576000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 04:17:20.593436    6767 iso.go:125] acquiring lock: {Name:mk0aae8f7f133a44b195b0eefe4c7ae573d14aac Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0930 04:17:20.601239    6767 out.go:177] * Starting "newest-cni-576000" primary control-plane node in "newest-cni-576000" cluster
	I0930 04:17:20.605420    6767 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0930 04:17:20.605434    6767 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0930 04:17:20.605442    6767 cache.go:56] Caching tarball of preloaded images
	I0930 04:17:20.605497    6767 preload.go:172] Found /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0930 04:17:20.605503    6767 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0930 04:17:20.605568    6767 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19734-1406/.minikube/profiles/newest-cni-576000/config.json ...
	I0930 04:17:20.606124    6767 start.go:360] acquireMachinesLock for newest-cni-576000: {Name:mk6ecfc7d8126da9fa2597e515ffeee9de676ee7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0930 04:17:20.606163    6767 start.go:364] duration metric: took 31.791µs to acquireMachinesLock for "newest-cni-576000"
	I0930 04:17:20.606172    6767 start.go:96] Skipping create...Using existing machine configuration
	I0930 04:17:20.606178    6767 fix.go:54] fixHost starting: 
	I0930 04:17:20.606314    6767 fix.go:112] recreateIfNeeded on newest-cni-576000: state=Stopped err=<nil>
	W0930 04:17:20.606323    6767 fix.go:138] unexpected machine state, will restart: <nil>
	I0930 04:17:20.610399    6767 out.go:177] * Restarting existing qemu2 VM for "newest-cni-576000" ...
	I0930 04:17:20.618373    6767 qemu.go:418] Using hvf for hardware acceleration
	I0930 04:17:20.618415    6767 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/newest-cni-576000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19734-1406/.minikube/machines/newest-cni-576000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/newest-cni-576000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8e:d3:40:f6:3d:67 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/newest-cni-576000/disk.qcow2
	I0930 04:17:20.620628    6767 main.go:141] libmachine: STDOUT: 
	I0930 04:17:20.620650    6767 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0930 04:17:20.620682    6767 fix.go:56] duration metric: took 14.505083ms for fixHost
	I0930 04:17:20.620687    6767 start.go:83] releasing machines lock for "newest-cni-576000", held for 14.519667ms
	W0930 04:17:20.620694    6767 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0930 04:17:20.620766    6767 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0930 04:17:20.620771    6767 start.go:729] Will try again in 5 seconds ...
	I0930 04:17:25.622836    6767 start.go:360] acquireMachinesLock for newest-cni-576000: {Name:mk6ecfc7d8126da9fa2597e515ffeee9de676ee7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0930 04:17:25.623281    6767 start.go:364] duration metric: took 353.459µs to acquireMachinesLock for "newest-cni-576000"
	I0930 04:17:25.623448    6767 start.go:96] Skipping create...Using existing machine configuration
	I0930 04:17:25.623469    6767 fix.go:54] fixHost starting: 
	I0930 04:17:25.624216    6767 fix.go:112] recreateIfNeeded on newest-cni-576000: state=Stopped err=<nil>
	W0930 04:17:25.624244    6767 fix.go:138] unexpected machine state, will restart: <nil>
	I0930 04:17:25.633576    6767 out.go:177] * Restarting existing qemu2 VM for "newest-cni-576000" ...
	I0930 04:17:25.636625    6767 qemu.go:418] Using hvf for hardware acceleration
	I0930 04:17:25.636814    6767 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/newest-cni-576000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19734-1406/.minikube/machines/newest-cni-576000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/newest-cni-576000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8e:d3:40:f6:3d:67 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19734-1406/.minikube/machines/newest-cni-576000/disk.qcow2
	I0930 04:17:25.646697    6767 main.go:141] libmachine: STDOUT: 
	I0930 04:17:25.646762    6767 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0930 04:17:25.646868    6767 fix.go:56] duration metric: took 23.398125ms for fixHost
	I0930 04:17:25.646890    6767 start.go:83] releasing machines lock for "newest-cni-576000", held for 23.571417ms
	W0930 04:17:25.647119    6767 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-576000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-576000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0930 04:17:25.654573    6767 out.go:201] 
	W0930 04:17:25.657626    6767 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0930 04:17:25.657650    6767 out.go:270] * 
	* 
	W0930 04:17:25.660077    6767 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0930 04:17:25.668890    6767 out.go:201] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p newest-cni-576000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-576000 -n newest-cni-576000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-576000 -n newest-cni-576000: exit status 7 (72.804667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-576000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/SecondStart (5.26s)
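
The root cause in this block is the host networking helper rather than the test: both restart attempts die with Failed to connect to "/var/run/socket_vmnet": Connection refused, meaning nothing is listening on the socket path the profile is configured with (SocketVMnetPath:/var/run/socket_vmnet in the cluster config above). A hedged diagnostic sketch; how the daemon is supervised varies per install, so the launchd check is an assumption, not something this log shows:

    ls -l /var/run/socket_vmnet                      # does the socket minikube dials exist at all?
    pgrep -fl socket_vmnet || echo "socket_vmnet is not running"
    # If the daemon is launchd-managed (assumption; the label depends on how it was installed):
    sudo launchctl list | grep -i socket_vmnet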

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p newest-cni-576000 image list --format=json
start_stop_delete_test.go:304: v1.31.1 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.3",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.1",
- 	"registry.k8s.io/kube-controller-manager:v1.31.1",
- 	"registry.k8s.io/kube-proxy:v1.31.1",
- 	"registry.k8s.io/kube-scheduler:v1.31.1",
- 	"registry.k8s.io/pause:3.10",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-576000 -n newest-cni-576000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-576000 -n newest-cni-576000: exit status 7 (30.337542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-576000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)

TestStartStop/group/newest-cni/serial/Pause (0.1s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p newest-cni-576000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p newest-cni-576000 --alsologtostderr -v=1: exit status 83 (41.795375ms)

-- stdout --
	* The control-plane node newest-cni-576000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p newest-cni-576000"

-- /stdout --
** stderr ** 
	I0930 04:17:25.856207    6781 out.go:345] Setting OutFile to fd 1 ...
	I0930 04:17:25.856369    6781 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 04:17:25.856372    6781 out.go:358] Setting ErrFile to fd 2...
	I0930 04:17:25.856375    6781 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 04:17:25.856487    6781 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19734-1406/.minikube/bin
	I0930 04:17:25.856714    6781 out.go:352] Setting JSON to false
	I0930 04:17:25.856723    6781 mustload.go:65] Loading cluster: newest-cni-576000
	I0930 04:17:25.856930    6781 config.go:182] Loaded profile config "newest-cni-576000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0930 04:17:25.860416    6781 out.go:177] * The control-plane node newest-cni-576000 host is not running: state=Stopped
	I0930 04:17:25.864292    6781 out.go:177]   To start a cluster, run: "minikube start -p newest-cni-576000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p newest-cni-576000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-576000 -n newest-cni-576000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-576000 -n newest-cni-576000: exit status 7 (30.680125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-576000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-576000 -n newest-cni-576000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-576000 -n newest-cni-576000: exit status 7 (30.456291ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-576000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (0.10s)

Test pass (154/273)

Order passed test Duration
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.1
9 TestDownloadOnly/v1.20.0/DeleteAll 0.11
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.1
12 TestDownloadOnly/v1.31.1/json-events 17.95
13 TestDownloadOnly/v1.31.1/preload-exists 0
16 TestDownloadOnly/v1.31.1/kubectl 0
17 TestDownloadOnly/v1.31.1/LogsDuration 0.08
18 TestDownloadOnly/v1.31.1/DeleteAll 0.11
19 TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds 0.1
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
27 TestAddons/Setup 255.82
29 TestAddons/serial/Volcano 38.83
31 TestAddons/serial/GCPAuth/Namespaces 0.1
34 TestAddons/parallel/Ingress 16.64
35 TestAddons/parallel/InspektorGadget 10.31
36 TestAddons/parallel/MetricsServer 5.29
38 TestAddons/parallel/CSI 50.72
39 TestAddons/parallel/Headlamp 15.65
40 TestAddons/parallel/CloudSpanner 5.21
41 TestAddons/parallel/LocalPath 9.56
42 TestAddons/parallel/NvidiaDevicePlugin 5.16
43 TestAddons/parallel/Yakd 10.26
44 TestAddons/StoppedEnableDisable 12.4
52 TestHyperKitDriverInstallOrUpdate 11.37
55 TestErrorSpam/setup 36.56
56 TestErrorSpam/start 0.35
57 TestErrorSpam/status 0.23
58 TestErrorSpam/pause 0.69
59 TestErrorSpam/unpause 0.63
60 TestErrorSpam/stop 64.26
63 TestFunctional/serial/CopySyncFile 0
64 TestFunctional/serial/StartWithProxy 49.27
65 TestFunctional/serial/AuditLog 0
66 TestFunctional/serial/SoftStart 37.05
67 TestFunctional/serial/KubeContext 0.03
68 TestFunctional/serial/KubectlGetPods 0.04
71 TestFunctional/serial/CacheCmd/cache/add_remote 9.14
72 TestFunctional/serial/CacheCmd/cache/add_local 1.73
73 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
74 TestFunctional/serial/CacheCmd/cache/list 0.03
75 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.07
76 TestFunctional/serial/CacheCmd/cache/cache_reload 2.09
77 TestFunctional/serial/CacheCmd/cache/delete 0.07
78 TestFunctional/serial/MinikubeKubectlCmd 1.94
79 TestFunctional/serial/MinikubeKubectlCmdDirectly 1.12
80 TestFunctional/serial/ExtraConfig 63.59
81 TestFunctional/serial/ComponentHealth 0.04
82 TestFunctional/serial/LogsCmd 0.66
83 TestFunctional/serial/LogsFileCmd 0.64
84 TestFunctional/serial/InvalidService 3.97
86 TestFunctional/parallel/ConfigCmd 0.22
87 TestFunctional/parallel/DashboardCmd 7.39
88 TestFunctional/parallel/DryRun 0.23
89 TestFunctional/parallel/InternationalLanguage 0.11
90 TestFunctional/parallel/StatusCmd 0.23
95 TestFunctional/parallel/AddonsCmd 0.1
96 TestFunctional/parallel/PersistentVolumeClaim 25.51
98 TestFunctional/parallel/SSHCmd 0.12
99 TestFunctional/parallel/CpCmd 0.45
101 TestFunctional/parallel/FileSync 0.07
102 TestFunctional/parallel/CertSync 0.37
106 TestFunctional/parallel/NodeLabels 0.04
108 TestFunctional/parallel/NonActiveRuntimeDisabled 0.06
110 TestFunctional/parallel/License 1.45
111 TestFunctional/parallel/Version/short 0.04
112 TestFunctional/parallel/Version/components 0.2
113 TestFunctional/parallel/ImageCommands/ImageListShort 0.08
114 TestFunctional/parallel/ImageCommands/ImageListTable 0.07
115 TestFunctional/parallel/ImageCommands/ImageListJson 0.08
116 TestFunctional/parallel/ImageCommands/ImageListYaml 0.07
117 TestFunctional/parallel/ImageCommands/ImageBuild 4.8
118 TestFunctional/parallel/ImageCommands/Setup 1.79
119 TestFunctional/parallel/DockerEnv/bash 0.27
120 TestFunctional/parallel/UpdateContextCmd/no_changes 0.05
121 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.05
122 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.05
123 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 0.48
124 TestFunctional/parallel/ProfileCmd/profile_not_create 0.15
125 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.44
126 TestFunctional/parallel/ProfileCmd/profile_list 0.14
127 TestFunctional/parallel/ProfileCmd/profile_json_output 0.21
128 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.12
130 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.22
131 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
133 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 10.1
134 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.13
135 TestFunctional/parallel/ImageCommands/ImageRemove 0.13
136 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.2
137 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.23
138 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.04
139 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
140 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.02
141 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.02
142 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
143 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.12
144 TestFunctional/parallel/ServiceCmd/DeployApp 6.09
145 TestFunctional/parallel/ServiceCmd/List 0.32
146 TestFunctional/parallel/ServiceCmd/JSONOutput 0.29
147 TestFunctional/parallel/ServiceCmd/HTTPS 0.11
148 TestFunctional/parallel/ServiceCmd/Format 0.1
149 TestFunctional/parallel/ServiceCmd/URL 0.1
150 TestFunctional/parallel/MountCmd/any-port 9.87
151 TestFunctional/parallel/MountCmd/specific-port 1.89
152 TestFunctional/parallel/MountCmd/VerifyCleanup 2.74
153 TestFunctional/delete_echo-server_images 0.03
154 TestFunctional/delete_my-image_image 0.01
155 TestFunctional/delete_minikube_cached_images 0.01
159 TestMultiControlPlane/serial/StartCluster 227.97
160 TestMultiControlPlane/serial/DeployApp 9.54
161 TestMultiControlPlane/serial/PingHostFromPods 0.73
162 TestMultiControlPlane/serial/AddWorkerNode 58.24
163 TestMultiControlPlane/serial/NodeLabels 0.13
164 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.29
165 TestMultiControlPlane/serial/CopyFile 4.21
169 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 26.22
177 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.05
184 TestJSONOutput/start/Audit 0
186 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
187 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
190 TestJSONOutput/pause/Audit 0
192 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
193 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
196 TestJSONOutput/unpause/Audit 0
198 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
199 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
201 TestJSONOutput/stop/Command 1.96
202 TestJSONOutput/stop/Audit 0
204 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
205 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
206 TestErrorJSONOutput 0.2
211 TestMainNoArgs 0.03
258 TestStoppedBinaryUpgrade/Setup 5.01
270 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
274 TestNoKubernetes/serial/VerifyK8sNotRunning 0.04
275 TestNoKubernetes/serial/ProfileList 31.42
276 TestNoKubernetes/serial/Stop 3.67
278 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.04
287 TestStoppedBinaryUpgrade/MinikubeLogs 0.65
293 TestStartStop/group/old-k8s-version/serial/Stop 3.42
296 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.11
306 TestStartStop/group/no-preload/serial/Stop 2.13
307 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.13
315 TestStartStop/group/embed-certs/serial/Stop 2.14
318 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.13
328 TestStartStop/group/default-k8s-diff-port/serial/Stop 3.22
329 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.12
331 TestStartStop/group/newest-cni/serial/DeployApp 0
332 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.06
335 TestStartStop/group/newest-cni/serial/Stop 3.31
338 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.12
340 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
341 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I0930 03:20:43.307019    1929 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
I0930 03:20:43.307347    1929 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.1s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-388000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-388000: exit status 85 (96.13875ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-388000 | jenkins | v1.34.0 | 30 Sep 24 03:20 PDT |          |
	|         | -p download-only-388000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=qemu2                 |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/30 03:20:04
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.23.0 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0930 03:20:04.345621    1930 out.go:345] Setting OutFile to fd 1 ...
	I0930 03:20:04.346035    1930 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 03:20:04.346040    1930 out.go:358] Setting ErrFile to fd 2...
	I0930 03:20:04.346042    1930 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 03:20:04.346233    1930 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19734-1406/.minikube/bin
	W0930 03:20:04.346362    1930 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19734-1406/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19734-1406/.minikube/config/config.json: no such file or directory
	I0930 03:20:04.347863    1930 out.go:352] Setting JSON to true
	I0930 03:20:04.365094    1930 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1167,"bootTime":1727690437,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0930 03:20:04.365205    1930 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0930 03:20:04.370452    1930 out.go:97] [download-only-388000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0930 03:20:04.370625    1930 notify.go:220] Checking for updates...
	W0930 03:20:04.370656    1930 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/preloaded-tarball: no such file or directory
	I0930 03:20:04.373364    1930 out.go:169] MINIKUBE_LOCATION=19734
	I0930 03:20:04.376356    1930 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19734-1406/kubeconfig
	I0930 03:20:04.381386    1930 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0930 03:20:04.382727    1930 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0930 03:20:04.386391    1930 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19734-1406/.minikube
	W0930 03:20:04.392379    1930 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0930 03:20:04.392625    1930 driver.go:394] Setting default libvirt URI to qemu:///system
	I0930 03:20:04.397312    1930 out.go:97] Using the qemu2 driver based on user configuration
	I0930 03:20:04.397328    1930 start.go:297] selected driver: qemu2
	I0930 03:20:04.397340    1930 start.go:901] validating driver "qemu2" against <nil>
	I0930 03:20:04.397397    1930 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0930 03:20:04.400377    1930 out.go:169] Automatically selected the socket_vmnet network
	I0930 03:20:04.405963    1930 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0930 03:20:04.406068    1930 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0930 03:20:04.406123    1930 cni.go:84] Creating CNI manager for ""
	I0930 03:20:04.406161    1930 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0930 03:20:04.406217    1930 start.go:340] cluster config:
	{Name:download-only-388000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-388000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 03:20:04.411278    1930 iso.go:125] acquiring lock: {Name:mk0aae8f7f133a44b195b0eefe4c7ae573d14aac Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0930 03:20:04.415318    1930 out.go:97] Downloading VM boot image ...
	I0930 03:20:04.415333    1930 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso
	I0930 03:20:22.429216    1930 out.go:97] Starting "download-only-388000" primary control-plane node in "download-only-388000" cluster
	I0930 03:20:22.429241    1930 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0930 03:20:22.703058    1930 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0930 03:20:22.703152    1930 cache.go:56] Caching tarball of preloaded images
	I0930 03:20:22.704002    1930 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0930 03:20:22.711250    1930 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0930 03:20:22.711277    1930 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0930 03:20:23.298344    1930 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0930 03:20:41.454787    1930 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0930 03:20:41.454962    1930 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0930 03:20:42.151938    1930 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0930 03:20:42.152148    1930 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19734-1406/.minikube/profiles/download-only-388000/config.json ...
	I0930 03:20:42.152165    1930 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19734-1406/.minikube/profiles/download-only-388000/config.json: {Name:mk7b46bb34296f896fabb72562914322ff711b8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 03:20:42.152429    1930 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0930 03:20:42.152624    1930 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0930 03:20:43.257296    1930 out.go:193] 
	W0930 03:20:43.262222    1930 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19734-1406/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x1096c16c0 0x1096c16c0 0x1096c16c0 0x1096c16c0 0x1096c16c0 0x1096c16c0 0x1096c16c0] Decompressors:map[bz2:0x14000811160 gz:0x14000811168 tar:0x14000811110 tar.bz2:0x14000811120 tar.gz:0x14000811130 tar.xz:0x14000811140 tar.zst:0x14000811150 tbz2:0x14000811120 tgz:0x14000811130 txz:0x14000811140 tzst:0x14000811150 xz:0x14000811170 zip:0x14000811180 zst:0x14000811178] Getters:map[file:0x140003e87f0 http:0x140009040a0 https:0x140009040f0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0930 03:20:43.262249    1930 out_reason.go:110] 
	W0930 03:20:43.270236    1930 out.go:283] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0930 03:20:43.274092    1930 out.go:193] 
	
	
	* The control-plane node download-only-388000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-388000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.10s)
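
Note on the "Failed to cache kubectl" warning buried in the log above: the getter 404s on https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256, which is consistent with upstream not publishing darwin/arm64 kubectl binaries for v1.20.0 (an inference from the 404, not something the log states). A one-line check against the same URL:

    curl -sI https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 | head -n 1
    # Expect a 404 status line, matching "bad response code: 404" in the getter error.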

TestDownloadOnly/v1.20.0/DeleteAll (0.11s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.11s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.1s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-388000
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.10s)

TestDownloadOnly/v1.31.1/json-events (17.95s)

=== RUN   TestDownloadOnly/v1.31.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-691000 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-691000 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=docker --driver=qemu2 : (17.949178375s)
--- PASS: TestDownloadOnly/v1.31.1/json-events (17.95s)

TestDownloadOnly/v1.31.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.1/preload-exists
I0930 03:21:01.605633    1929 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
I0930 03:21:01.605692    1929 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.31.1/preload-exists (0.00s)

TestDownloadOnly/v1.31.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.1/kubectl
--- PASS: TestDownloadOnly/v1.31.1/kubectl (0.00s)

TestDownloadOnly/v1.31.1/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.31.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-691000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-691000: exit status 85 (79.986333ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-388000 | jenkins | v1.34.0 | 30 Sep 24 03:20 PDT |                     |
	|         | -p download-only-388000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 30 Sep 24 03:20 PDT | 30 Sep 24 03:20 PDT |
	| delete  | -p download-only-388000        | download-only-388000 | jenkins | v1.34.0 | 30 Sep 24 03:20 PDT | 30 Sep 24 03:20 PDT |
	| start   | -o=json --download-only        | download-only-691000 | jenkins | v1.34.0 | 30 Sep 24 03:20 PDT |                     |
	|         | -p download-only-691000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/30 03:20:43
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.23.0 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0930 03:20:43.684827    1963 out.go:345] Setting OutFile to fd 1 ...
	I0930 03:20:43.684969    1963 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 03:20:43.684972    1963 out.go:358] Setting ErrFile to fd 2...
	I0930 03:20:43.684974    1963 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 03:20:43.685105    1963 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19734-1406/.minikube/bin
	I0930 03:20:43.686168    1963 out.go:352] Setting JSON to true
	I0930 03:20:43.702506    1963 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1206,"bootTime":1727690437,"procs":460,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0930 03:20:43.702580    1963 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0930 03:20:43.707821    1963 out.go:97] [download-only-691000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0930 03:20:43.707912    1963 notify.go:220] Checking for updates...
	I0930 03:20:43.711752    1963 out.go:169] MINIKUBE_LOCATION=19734
	I0930 03:20:43.718758    1963 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19734-1406/kubeconfig
	I0930 03:20:43.725736    1963 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0930 03:20:43.733768    1963 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0930 03:20:43.740725    1963 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19734-1406/.minikube
	W0930 03:20:43.746529    1963 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0930 03:20:43.746821    1963 driver.go:394] Setting default libvirt URI to qemu:///system
	I0930 03:20:43.750786    1963 out.go:97] Using the qemu2 driver based on user configuration
	I0930 03:20:43.750795    1963 start.go:297] selected driver: qemu2
	I0930 03:20:43.750799    1963 start.go:901] validating driver "qemu2" against <nil>
	I0930 03:20:43.750869    1963 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0930 03:20:43.754632    1963 out.go:169] Automatically selected the socket_vmnet network
	I0930 03:20:43.761107    1963 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0930 03:20:43.761268    1963 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0930 03:20:43.761288    1963 cni.go:84] Creating CNI manager for ""
	I0930 03:20:43.761315    1963 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0930 03:20:43.761323    1963 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0930 03:20:43.761365    1963 start.go:340] cluster config:
	{Name:download-only-691000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:download-only-691000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 03:20:43.765276    1963 iso.go:125] acquiring lock: {Name:mk0aae8f7f133a44b195b0eefe4c7ae573d14aac Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0930 03:20:43.769628    1963 out.go:97] Starting "download-only-691000" primary control-plane node in "download-only-691000" cluster
	I0930 03:20:43.769634    1963 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0930 03:20:44.361903    1963 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0930 03:20:44.361996    1963 cache.go:56] Caching tarball of preloaded images
	I0930 03:20:44.363015    1963 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0930 03:20:44.367855    1963 out.go:97] Downloading Kubernetes v1.31.1 preload ...
	I0930 03:20:44.367920    1963 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 ...
	I0930 03:20:44.928348    1963 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4?checksum=md5:402f69b5e09ccb1e1dbe401b4cdd104d -> /Users/jenkins/minikube-integration/19734-1406/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	
	
	* The control-plane node download-only-691000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-691000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.1/LogsDuration (0.08s)
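
As with the v1.20.0 run, exit status 85 is the expected outcome here: a --download-only profile never creates a host, so there is nothing for "minikube logs" to read. A quick way to confirm by hand (sketch; profile name from the log):

	minikube logs -p download-only-691000
	echo "exit status: $?"   # 85 in this run; the test logs the failure and still passes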

TestDownloadOnly/v1.31.1/DeleteAll (0.11s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.31.1/DeleteAll (0.11s)

TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.1s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-691000
--- PASS: TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.10s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:975: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-584000
addons_test.go:975: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons enable dashboard -p addons-584000: exit status 85 (58.252166ms)

-- stdout --
	* Profile "addons-584000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-584000"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:986: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-584000
addons_test.go:986: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons disable dashboard -p addons-584000: exit status 85 (62.216791ms)

-- stdout --
	* Profile "addons-584000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-584000"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

TestAddons/Setup (255.82s)

=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-darwin-arm64 start -p addons-584000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns
addons_test.go:107: (dbg) Done: out/minikube-darwin-arm64 start -p addons-584000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns: (4m15.821198542s)
--- PASS: TestAddons/Setup (255.82s)
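
The setup command is long but mechanical: one --addons flag per addon on a single start. A trimmed, hedged equivalent for local experiments with a subset of the same addons:

	minikube start -p addons-584000 --wait=true --memory=4000 --driver=qemu2 \
	  --addons=registry --addons=metrics-server --addons=ingress --addons=ingress-dns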

TestAddons/serial/Volcano (38.83s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:851: volcano-controller stabilized in 6.716875ms
addons_test.go:835: volcano-scheduler stabilized in 6.734417ms
addons_test.go:843: volcano-admission stabilized in 6.794875ms
addons_test.go:857: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-6c9778cbdf-qb2cv" [e1eefaa0-35b8-457e-944d-4aa34b0a5cbe] Running
addons_test.go:857: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 5.004634416s
addons_test.go:861: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-5874dfdd79-4vw5l" [1b4f66fc-93a8-4c7f-a52a-b9e060ac822c] Running
addons_test.go:861: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.003705416s
addons_test.go:865: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-789ffc5785-qhw8h" [05f1a7fd-8dd5-4104-975f-ed93fa884c7b] Running
addons_test.go:865: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.004159458s
addons_test.go:870: (dbg) Run:  kubectl --context addons-584000 delete -n volcano-system job volcano-admission-init
addons_test.go:876: (dbg) Run:  kubectl --context addons-584000 create -f testdata/vcjob.yaml
addons_test.go:884: (dbg) Run:  kubectl --context addons-584000 get vcjob -n my-volcano
addons_test.go:902: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [ec7e4f73-64f0-4045-b812-ce47160c8ec4] Pending
helpers_test.go:344: "test-job-nginx-0" [ec7e4f73-64f0-4045-b812-ce47160c8ec4] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [ec7e4f73-64f0-4045-b812-ce47160c8ec4] Running
addons_test.go:902: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 13.004671209s
addons_test.go:906: (dbg) Run:  out/minikube-darwin-arm64 -p addons-584000 addons disable volcano --alsologtostderr -v=1
addons_test.go:906: (dbg) Done: out/minikube-darwin-arm64 -p addons-584000 addons disable volcano --alsologtostderr -v=1: (10.569985417s)
--- PASS: TestAddons/serial/Volcano (38.83s)
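
The three stabilization waits map directly onto label selectors in the volcano-system namespace. A hedged sketch of the same checks with plain kubectl, followed by the sample job submission the test performs:

	kubectl --context addons-584000 -n volcano-system get pods -l app=volcano-scheduler
	kubectl --context addons-584000 -n volcano-system get pods -l app=volcano-admission
	kubectl --context addons-584000 -n volcano-system get pods -l app=volcano-controller
	# Submit the suite's sample job and watch it in the my-volcano namespace
	kubectl --context addons-584000 create -f testdata/vcjob.yaml
	kubectl --context addons-584000 get vcjob -n my-volcano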

TestAddons/serial/GCPAuth/Namespaces (0.1s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:594: (dbg) Run:  kubectl --context addons-584000 create ns new-namespace
addons_test.go:608: (dbg) Run:  kubectl --context addons-584000 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.10s)

TestAddons/parallel/Ingress (16.64s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:205: (dbg) Run:  kubectl --context addons-584000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:230: (dbg) Run:  kubectl --context addons-584000 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:243: (dbg) Run:  kubectl --context addons-584000 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:248: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [58d4bd6f-6434-4a89-bf75-66bad96c047d] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [58d4bd6f-6434-4a89-bf75-66bad96c047d] Running
addons_test.go:248: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 8.010132458s
I0930 03:35:28.446580    1929 kapi.go:150] Service nginx in namespace default found.
addons_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p addons-584000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:284: (dbg) Run:  kubectl --context addons-584000 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:289: (dbg) Run:  out/minikube-darwin-arm64 -p addons-584000 ip
addons_test.go:295: (dbg) Run:  nslookup hello-john.test 192.168.105.2
addons_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p addons-584000 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:309: (dbg) Run:  out/minikube-darwin-arm64 -p addons-584000 addons disable ingress --alsologtostderr -v=1
addons_test.go:309: (dbg) Done: out/minikube-darwin-arm64 -p addons-584000 addons disable ingress --alsologtostderr -v=1: (7.2285785s)
--- PASS: TestAddons/parallel/Ingress (16.64s)
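
The ingress verification boils down to two probes, both visible in the log: an in-node curl using the Host header the Ingress routes on, and a DNS lookup against the node IP to exercise ingress-dns. Sketch:

	minikube -p addons-584000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
	nslookup hello-john.test "$(minikube -p addons-584000 ip)"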

TestAddons/parallel/InspektorGadget (10.31s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:786: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-8vdvd" [623093c2-f338-4428-8f42-e3edad1a510e] Running
addons_test.go:786: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.009708792s
addons_test.go:789: (dbg) Run:  out/minikube-darwin-arm64 addons disable inspektor-gadget -p addons-584000
addons_test.go:789: (dbg) Done: out/minikube-darwin-arm64 addons disable inspektor-gadget -p addons-584000: (5.294788958s)
--- PASS: TestAddons/parallel/InspektorGadget (10.31s)

TestAddons/parallel/MetricsServer (5.29s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:405: metrics-server stabilized in 1.356125ms
addons_test.go:407: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-rmskq" [e7606862-ddb6-4102-aa0d-f00de94eaba5] Running
addons_test.go:407: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.012566083s
addons_test.go:413: (dbg) Run:  kubectl --context addons-584000 top pods -n kube-system
addons_test.go:430: (dbg) Run:  out/minikube-darwin-arm64 -p addons-584000 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.29s)
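
The functional check behind this test is simply that the metrics API serves pod metrics; once metrics-server is healthy the same probe works by hand:

	kubectl --context addons-584000 top pods -n kube-system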

TestAddons/parallel/CSI (50.72s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I0930 03:34:49.673126    1929 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0930 03:34:49.675582    1929 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0930 03:34:49.675589    1929 kapi.go:107] duration metric: took 2.48775ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:505: csi-hostpath-driver pods stabilized in 2.491292ms
addons_test.go:508: (dbg) Run:  kubectl --context addons-584000 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:513: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-584000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-584000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-584000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-584000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-584000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-584000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-584000 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:518: (dbg) Run:  kubectl --context addons-584000 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:523: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [5c0587fb-e523-4b3a-8bcc-3c24079f8b81] Pending
helpers_test.go:344: "task-pv-pod" [5c0587fb-e523-4b3a-8bcc-3c24079f8b81] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [5c0587fb-e523-4b3a-8bcc-3c24079f8b81] Running
addons_test.go:523: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 9.012493625s
addons_test.go:528: (dbg) Run:  kubectl --context addons-584000 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:533: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-584000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-584000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:538: (dbg) Run:  kubectl --context addons-584000 delete pod task-pv-pod
addons_test.go:538: (dbg) Done: kubectl --context addons-584000 delete pod task-pv-pod: (1.108600625s)
addons_test.go:544: (dbg) Run:  kubectl --context addons-584000 delete pvc hpvc
addons_test.go:550: (dbg) Run:  kubectl --context addons-584000 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:555: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-584000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-584000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-584000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-584000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-584000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-584000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-584000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-584000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-584000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-584000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-584000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-584000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-584000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-584000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-584000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-584000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-584000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-584000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-584000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:560: (dbg) Run:  kubectl --context addons-584000 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:565: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [96a0da39-662c-4d54-ba9b-3cc6b338b8ba] Pending
helpers_test.go:344: "task-pv-pod-restore" [96a0da39-662c-4d54-ba9b-3cc6b338b8ba] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [96a0da39-662c-4d54-ba9b-3cc6b338b8ba] Running
addons_test.go:565: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.008810833s
addons_test.go:570: (dbg) Run:  kubectl --context addons-584000 delete pod task-pv-pod-restore
addons_test.go:570: (dbg) Done: kubectl --context addons-584000 delete pod task-pv-pod-restore: (1.098354s)
addons_test.go:574: (dbg) Run:  kubectl --context addons-584000 delete pvc hpvc-restore
addons_test.go:578: (dbg) Run:  kubectl --context addons-584000 delete volumesnapshot new-snapshot-demo
addons_test.go:582: (dbg) Run:  out/minikube-darwin-arm64 -p addons-584000 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:582: (dbg) Done: out/minikube-darwin-arm64 -p addons-584000 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.111686583s)
addons_test.go:586: (dbg) Run:  out/minikube-darwin-arm64 -p addons-584000 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (50.72s)
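
This test walks the full CSI provision/snapshot/restore cycle. A hedged outline of the same sequence (the manifests are the suite's testdata files; object names match the log):

	kubectl --context addons-584000 create -f testdata/csi-hostpath-driver/pvc.yaml
	kubectl --context addons-584000 get pvc hpvc -o jsonpath={.status.phase}   # poll until Bound
	kubectl --context addons-584000 create -f testdata/csi-hostpath-driver/pv-pod.yaml
	kubectl --context addons-584000 create -f testdata/csi-hostpath-driver/snapshot.yaml
	kubectl --context addons-584000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse}
	kubectl --context addons-584000 delete pod task-pv-pod
	kubectl --context addons-584000 delete pvc hpvc
	# Restore from the snapshot and mount it in a fresh pod
	kubectl --context addons-584000 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
	kubectl --context addons-584000 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml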

TestAddons/parallel/Headlamp (15.65s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:768: (dbg) Run:  out/minikube-darwin-arm64 addons enable headlamp -p addons-584000 --alsologtostderr -v=1
addons_test.go:773: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7b5c95b59d-724vj" [bdb0683b-ae9a-44be-99fd-0b541f025308] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7b5c95b59d-724vj" [bdb0683b-ae9a-44be-99fd-0b541f025308] Running
addons_test.go:773: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 10.004479917s
addons_test.go:777: (dbg) Run:  out/minikube-darwin-arm64 -p addons-584000 addons disable headlamp --alsologtostderr -v=1
addons_test.go:777: (dbg) Done: out/minikube-darwin-arm64 -p addons-584000 addons disable headlamp --alsologtostderr -v=1: (5.295893625s)
--- PASS: TestAddons/parallel/Headlamp (15.65s)

TestAddons/parallel/CloudSpanner (5.21s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:805: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5b584cc74-5hqjq" [0f765baa-150c-438d-92fe-a71156c19f33] Running
addons_test.go:805: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.014514083s
addons_test.go:808: (dbg) Run:  out/minikube-darwin-arm64 addons disable cloud-spanner -p addons-584000
--- PASS: TestAddons/parallel/CloudSpanner (5.21s)

TestAddons/parallel/LocalPath (9.56s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:920: (dbg) Run:  kubectl --context addons-584000 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:926: (dbg) Run:  kubectl --context addons-584000 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:930: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-584000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-584000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-584000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-584000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-584000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-584000 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:933: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [84bd13bd-055e-4699-911c-5dd43c193b6f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [84bd13bd-055e-4699-911c-5dd43c193b6f] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [84bd13bd-055e-4699-911c-5dd43c193b6f] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:933: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.003110667s
addons_test.go:938: (dbg) Run:  kubectl --context addons-584000 get pvc test-pvc -o=json
addons_test.go:947: (dbg) Run:  out/minikube-darwin-arm64 -p addons-584000 ssh "cat /opt/local-path-provisioner/pvc-7a8edbd9-cb85-4491-8c48-da2806ec0d22_default_test-pvc/file1"
addons_test.go:959: (dbg) Run:  kubectl --context addons-584000 delete pod test-local-path
addons_test.go:963: (dbg) Run:  kubectl --context addons-584000 delete pvc test-pvc
addons_test.go:967: (dbg) Run:  out/minikube-darwin-arm64 -p addons-584000 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (9.56s)

TestAddons/parallel/NvidiaDevicePlugin (5.16s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:999: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-kvn2s" [cf218953-21b1-4281-8276-c5c77d83e6eb] Running
addons_test.go:999: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.005920291s
addons_test.go:1002: (dbg) Run:  out/minikube-darwin-arm64 addons disable nvidia-device-plugin -p addons-584000
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.16s)

TestAddons/parallel/Yakd (10.26s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1010: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-2f5sl" [22fe01e0-a047-40e7-9f2f-2df1c3c14c44] Running
addons_test.go:1010: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.006647959s
addons_test.go:1014: (dbg) Run:  out/minikube-darwin-arm64 -p addons-584000 addons disable yakd --alsologtostderr -v=1
addons_test.go:1014: (dbg) Done: out/minikube-darwin-arm64 -p addons-584000 addons disable yakd --alsologtostderr -v=1: (5.257060084s)
--- PASS: TestAddons/parallel/Yakd (10.26s)

TestAddons/StoppedEnableDisable (12.4s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-darwin-arm64 stop -p addons-584000
addons_test.go:170: (dbg) Done: out/minikube-darwin-arm64 stop -p addons-584000: (12.211254834s)
addons_test.go:174: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-584000
addons_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-584000
addons_test.go:183: (dbg) Run:  out/minikube-darwin-arm64 addons disable gvisor -p addons-584000
--- PASS: TestAddons/StoppedEnableDisable (12.40s)
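
The point of this test is that addon toggling must keep working against a stopped cluster. Hedged reproduction:

	minikube stop -p addons-584000                       # ~12s in this run
	minikube addons enable dashboard -p addons-584000    # must succeed with the VM down
	minikube addons disable dashboard -p addons-584000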

TestHyperKitDriverInstallOrUpdate (11.37s)

=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate

=== CONT  TestHyperKitDriverInstallOrUpdate
I0930 04:02:21.968798    1929 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0930 04:02:21.969011    1929 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/workspace/testdata/hyperkit-driver-without-version:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin:/opt/homebrew/bin
W0930 04:02:24.375783    1929 install.go:62] docker-machine-driver-hyperkit: exit status 1
W0930 04:02:24.376056    1929 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-hyperkit:
I0930 04:02:24.376093    1929 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64.sha256 -> /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperKitDriverInstallOrUpdate2824951066/001/docker-machine-driver-hyperkit
I0930 04:02:24.896110    1929 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64.sha256 Dst:/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperKitDriverInstallOrUpdate2824951066/001/docker-machine-driver-hyperkit.download Pwd: Mode:2 Umask:---------- Detectors:[0x1064f2d40 0x1064f2d40 0x1064f2d40 0x1064f2d40 0x1064f2d40 0x1064f2d40 0x1064f2d40] Decompressors:map[bz2:0x14000121600 gz:0x14000121608 tar:0x140001215a0 tar.bz2:0x140001215b0 tar.gz:0x140001215c0 tar.xz:0x140001215d0 tar.zst:0x140001215e0 tbz2:0x140001215b0 tgz:0x140001215c0 txz:0x140001215d0 tzst:0x140001215e0 xz:0x14000121610 zip:0x14000121620 zst:0x14000121618] Getters:map[file:0x1400154e200 http:0x1400066d860 https:0x1400066d8b0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0930 04:02:24.896268    1929 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit.sha256 -> /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperKitDriverInstallOrUpdate2824951066/001/docker-machine-driver-hyperkit
--- PASS: TestHyperKitDriverInstallOrUpdate (11.37s)

TestErrorSpam/setup (36.56s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -p nospam-930000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-930000 --driver=qemu2 
error_spam_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -p nospam-930000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-930000 --driver=qemu2 : (36.556048917s)
error_spam_test.go:91: acceptable stderr: "! /usr/local/bin/kubectl is version 1.29.2, which may have incompatibilities with Kubernetes 1.31.1."
--- PASS: TestErrorSpam/setup (36.56s)

TestErrorSpam/start (0.35s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-930000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-930000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-930000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-930000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-930000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-930000 start --dry-run
--- PASS: TestErrorSpam/start (0.35s)

TestErrorSpam/status (0.23s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-930000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-930000 status
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-930000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-930000 status
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-930000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-930000 status
--- PASS: TestErrorSpam/status (0.23s)

TestErrorSpam/pause (0.69s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-930000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-930000 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-930000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-930000 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-930000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-930000 pause
--- PASS: TestErrorSpam/pause (0.69s)

TestErrorSpam/unpause (0.63s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-930000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-930000 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-930000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-930000 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-930000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-930000 unpause
--- PASS: TestErrorSpam/unpause (0.63s)

TestErrorSpam/stop (64.26s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-930000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-930000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-930000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-930000 stop: (12.207326542s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-930000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-930000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-930000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-930000 stop: (26.029810458s)
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-930000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-930000 stop
error_spam_test.go:182: (dbg) Done: out/minikube-darwin-arm64 -p nospam-930000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-930000 stop: (26.022598042s)
--- PASS: TestErrorSpam/stop (64.26s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /Users/jenkins/minikube-integration/19734-1406/.minikube/files/etc/test/nested/copy/1929/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (49.27s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-853000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 
functional_test.go:2234: (dbg) Done: out/minikube-darwin-arm64 start -p functional-853000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 : (49.26848125s)
--- PASS: TestFunctional/serial/StartWithProxy (49.27s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (37.05s)

=== RUN   TestFunctional/serial/SoftStart
I0930 03:38:26.710052    1929 config.go:182] Loaded profile config "functional-853000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
functional_test.go:659: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-853000 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-darwin-arm64 start -p functional-853000 --alsologtostderr -v=8: (37.045737167s)
functional_test.go:663: soft start took 37.04622825s for "functional-853000" cluster.
I0930 03:39:03.755090    1929 config.go:182] Loaded profile config "functional-853000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/SoftStart (37.05s)

TestFunctional/serial/KubeContext (0.03s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.03s)

TestFunctional/serial/KubectlGetPods (0.04s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-853000 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.04s)

TestFunctional/serial/CacheCmd/cache/add_remote (9.14s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-arm64 -p functional-853000 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-darwin-arm64 -p functional-853000 cache add registry.k8s.io/pause:3.1: (3.518755125s)
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-arm64 -p functional-853000 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-darwin-arm64 -p functional-853000 cache add registry.k8s.io/pause:3.3: (3.370158208s)
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-arm64 -p functional-853000 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-darwin-arm64 -p functional-853000 cache add registry.k8s.io/pause:latest: (2.255129916s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (9.14s)

TestFunctional/serial/CacheCmd/cache/add_local (1.73s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-853000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalserialCacheCmdcacheadd_local3505732878/001
functional_test.go:1089: (dbg) Run:  out/minikube-darwin-arm64 -p functional-853000 cache add minikube-local-cache-test:functional-853000
functional_test.go:1089: (dbg) Done: out/minikube-darwin-arm64 -p functional-853000 cache add minikube-local-cache-test:functional-853000: (1.398321625s)
functional_test.go:1094: (dbg) Run:  out/minikube-darwin-arm64 -p functional-853000 cache delete minikube-local-cache-test:functional-853000
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-853000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.73s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

TestFunctional/serial/CacheCmd/cache/list (0.03s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-darwin-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.03s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-darwin-arm64 -p functional-853000 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.07s)

TestFunctional/serial/CacheCmd/cache/cache_reload (2.09s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-darwin-arm64 -p functional-853000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-darwin-arm64 -p functional-853000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-853000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (65.802291ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-darwin-arm64 -p functional-853000 cache reload
functional_test.go:1158: (dbg) Done: out/minikube-darwin-arm64 -p functional-853000 cache reload: (1.874292791s)
functional_test.go:1163: (dbg) Run:  out/minikube-darwin-arm64 -p functional-853000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.09s)
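
The cache_reload round trip is the clearest demonstration of the host-side image cache: delete the image inside the node, confirm the runtime no longer sees it, then push it back from the cache. Sketch using the same image as the test:

	minikube -p functional-853000 ssh sudo docker rmi registry.k8s.io/pause:latest
	minikube -p functional-853000 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # expected: exit 1
	minikube -p functional-853000 cache reload
	minikube -p functional-853000 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again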

TestFunctional/serial/CacheCmd/cache/delete (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.07s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (1.94s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-darwin-arm64 -p functional-853000 kubectl -- --context functional-853000 get pods
functional_test.go:716: (dbg) Done: out/minikube-darwin-arm64 -p functional-853000 kubectl -- --context functional-853000 get pods: (1.937229041s)
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (1.94s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (1.12s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-853000 get pods
functional_test.go:741: (dbg) Done: out/kubectl --context functional-853000 get pods: (1.120571s)
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (1.12s)
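Both of the preceding tests exercise the bundled kubectl; a minimal sketch of the passthrough form, where everything after the double dash is handed to kubectl unchanged:
# arguments after "kubectl --" go straight to the bundled kubectl binary
out/minikube-darwin-arm64 -p functional-853000 kubectl -- --context functional-853000 get pods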

                                                
                                    
TestFunctional/serial/ExtraConfig (63.59s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-853000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0930 03:40:18.225857    1929 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19734-1406/.minikube/profiles/addons-584000/client.crt: no such file or directory" logger="UnhandledError"
E0930 03:40:18.233636    1929 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19734-1406/.minikube/profiles/addons-584000/client.crt: no such file or directory" logger="UnhandledError"
E0930 03:40:18.247076    1929 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19734-1406/.minikube/profiles/addons-584000/client.crt: no such file or directory" logger="UnhandledError"
E0930 03:40:18.270550    1929 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19734-1406/.minikube/profiles/addons-584000/client.crt: no such file or directory" logger="UnhandledError"
E0930 03:40:18.314001    1929 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19734-1406/.minikube/profiles/addons-584000/client.crt: no such file or directory" logger="UnhandledError"
E0930 03:40:18.397567    1929 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19734-1406/.minikube/profiles/addons-584000/client.crt: no such file or directory" logger="UnhandledError"
E0930 03:40:18.561126    1929 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19734-1406/.minikube/profiles/addons-584000/client.crt: no such file or directory" logger="UnhandledError"
E0930 03:40:18.883660    1929 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19734-1406/.minikube/profiles/addons-584000/client.crt: no such file or directory" logger="UnhandledError"
E0930 03:40:19.527402    1929 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19734-1406/.minikube/profiles/addons-584000/client.crt: no such file or directory" logger="UnhandledError"
E0930 03:40:20.811126    1929 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19734-1406/.minikube/profiles/addons-584000/client.crt: no such file or directory" logger="UnhandledError"
E0930 03:40:23.373026    1929 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19734-1406/.minikube/profiles/addons-584000/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:757: (dbg) Done: out/minikube-darwin-arm64 start -p functional-853000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (1m3.588195875s)
functional_test.go:761: restart took 1m3.588292791s for "functional-853000" cluster.
I0930 03:40:23.652918    1929 config.go:182] Loaded profile config "functional-853000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/ExtraConfig (63.59s)
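The --extra-config flag follows a component.key=value convention; a minimal sketch of the restart exercised above (the admission-plugin value is the one this test sets):
# pass a flag through to the apiserver and wait for every component to come back healthy
out/minikube-darwin-arm64 start -p functional-853000 \
  --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision \
  --wait=all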

                                                
                                    
TestFunctional/serial/ComponentHealth (0.04s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-853000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.04s)

                                                
                                    
TestFunctional/serial/LogsCmd (0.66s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-darwin-arm64 -p functional-853000 logs
--- PASS: TestFunctional/serial/LogsCmd (0.66s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (0.64s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-darwin-arm64 -p functional-853000 logs --file /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalserialLogsFileCmd3963907616/001/logs.txt
--- PASS: TestFunctional/serial/LogsFileCmd (0.64s)

                                                
                                    
TestFunctional/serial/InvalidService (3.97s)
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-853000 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-darwin-arm64 service invalid-svc -p functional-853000
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-darwin-arm64 service invalid-svc -p functional-853000: exit status 115 (147.717625ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.105.4:30154 |
	|-----------|-------------|-------------|----------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                            │
	│    * If the above advice does not help, please let us know:                                                                │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                              │
	│                                                                                                                            │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                   │
	│    * Please also attach the following file to the GitHub issue:                                                            │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log    │
	│                                                                                                                            │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-853000 delete -f testdata/invalidsvc.yaml
E0930 03:40:28.496542    1929 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19734-1406/.minikube/profiles/addons-584000/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestFunctional/serial/InvalidService (3.97s)
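Exit status 115 above is minikube's SVC_UNREACHABLE code for a service with no running backing pod; a minimal sketch of the check (manifest path as in the test tree):
kubectl --context functional-853000 apply -f testdata/invalidsvc.yaml
# expected: exit 115 with "X Exiting due to SVC_UNREACHABLE", since no pod backs invalid-svc
out/minikube-darwin-arm64 service invalid-svc -p functional-853000
kubectl --context functional-853000 delete -f testdata/invalidsvc.yaml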

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.22s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-853000 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-853000 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-853000 config get cpus: exit status 14 (29.430833ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-853000 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-853000 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-853000 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-853000 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-853000 config get cpus: exit status 14 (29.043167ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.22s)
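Exit status 14 above is the expected result of reading a key that is not set; a minimal sketch of the set/get/unset round trip the test performs:
out/minikube-darwin-arm64 -p functional-853000 config get cpus      # unset: exit 14, "specified key could not be found in config"
out/minikube-darwin-arm64 -p functional-853000 config set cpus 2
out/minikube-darwin-arm64 -p functional-853000 config get cpus      # prints 2
out/minikube-darwin-arm64 -p functional-853000 config unset cpus    # next get is back to exit 14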

                                                
                                    
TestFunctional/parallel/DashboardCmd (7.39s)
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-853000 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-853000 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 3344: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (7.39s)
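The daemon invocation above keeps the dashboard proxy in the foreground until the test stops it; a minimal sketch of the same command (port 36195 is the one this run picked):
# --url prints the proxy URL instead of opening a browser
out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-853000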

                                                
                                    
TestFunctional/parallel/DryRun (0.23s)
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-853000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:974: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-853000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (114.794042ms)

                                                
                                                
-- stdout --
	* [functional-853000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19734
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19734-1406/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19734-1406/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0930 03:41:18.666151    3329 out.go:345] Setting OutFile to fd 1 ...
	I0930 03:41:18.666300    3329 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 03:41:18.666303    3329 out.go:358] Setting ErrFile to fd 2...
	I0930 03:41:18.666305    3329 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 03:41:18.666443    3329 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19734-1406/.minikube/bin
	I0930 03:41:18.667520    3329 out.go:352] Setting JSON to false
	I0930 03:41:18.683914    3329 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2441,"bootTime":1727690437,"procs":509,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0930 03:41:18.683995    3329 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0930 03:41:18.688737    3329 out.go:177] * [functional-853000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0930 03:41:18.695635    3329 out.go:177]   - MINIKUBE_LOCATION=19734
	I0930 03:41:18.695684    3329 notify.go:220] Checking for updates...
	I0930 03:41:18.702605    3329 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19734-1406/kubeconfig
	I0930 03:41:18.705643    3329 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0930 03:41:18.708711    3329 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0930 03:41:18.710110    3329 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19734-1406/.minikube
	I0930 03:41:18.713604    3329 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0930 03:41:18.716914    3329 config.go:182] Loaded profile config "functional-853000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0930 03:41:18.717179    3329 driver.go:394] Setting default libvirt URI to qemu:///system
	I0930 03:41:18.721511    3329 out.go:177] * Using the qemu2 driver based on existing profile
	I0930 03:41:18.728672    3329 start.go:297] selected driver: qemu2
	I0930 03:41:18.728679    3329 start.go:901] validating driver "qemu2" against &{Name:functional-853000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-853000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 03:41:18.728749    3329 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0930 03:41:18.735656    3329 out.go:201] 
	W0930 03:41:18.739651    3329 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0930 03:41:18.743684    3329 out.go:201] 

                                                
                                                
** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-853000 --dry-run --alsologtostderr -v=1 --driver=qemu2 
--- PASS: TestFunctional/parallel/DryRun (0.23s)
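--dry-run validates flags against the existing profile without touching the VM; a minimal sketch of the two cases exercised above:
# below the 1800MB usable minimum: exit 23 with RSRC_INSUFFICIENT_REQ_MEMORY
out/minikube-darwin-arm64 start -p functional-853000 --dry-run --memory 250MB --driver=qemu2
# with defaults the validation passes and exits 0
out/minikube-darwin-arm64 start -p functional-853000 --dry-run --driver=qemu2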

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.11s)
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-853000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-853000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (114.695917ms)

                                                
                                                
-- stdout --
	* [functional-853000] minikube v1.34.0 sur Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19734
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19734-1406/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19734-1406/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote qemu2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0930 03:41:06.433102    3269 out.go:345] Setting OutFile to fd 1 ...
	I0930 03:41:06.433211    3269 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 03:41:06.433214    3269 out.go:358] Setting ErrFile to fd 2...
	I0930 03:41:06.433216    3269 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 03:41:06.433342    3269 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19734-1406/.minikube/bin
	I0930 03:41:06.434894    3269 out.go:352] Setting JSON to false
	I0930 03:41:06.454212    3269 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2429,"bootTime":1727690437,"procs":521,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0930 03:41:06.454318    3269 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0930 03:41:06.457534    3269 out.go:177] * [functional-853000] minikube v1.34.0 sur Darwin 14.5 (arm64)
	I0930 03:41:06.465502    3269 out.go:177]   - MINIKUBE_LOCATION=19734
	I0930 03:41:06.465581    3269 notify.go:220] Checking for updates...
	I0930 03:41:06.474472    3269 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19734-1406/kubeconfig
	I0930 03:41:06.478512    3269 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0930 03:41:06.481553    3269 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0930 03:41:06.484522    3269 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19734-1406/.minikube
	I0930 03:41:06.487517    3269 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0930 03:41:06.490839    3269 config.go:182] Loaded profile config "functional-853000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0930 03:41:06.491086    3269 driver.go:394] Setting default libvirt URI to qemu:///system
	I0930 03:41:06.495443    3269 out.go:177] * Utilisation du pilote qemu2 basé sur le profil existant
	I0930 03:41:06.502534    3269 start.go:297] selected driver: qemu2
	I0930 03:41:06.502540    3269 start.go:901] validating driver "qemu2" against &{Name:functional-853000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-853000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 03:41:06.502607    3269 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0930 03:41:06.508551    3269 out.go:201] 
	W0930 03:41:06.511508    3269 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0930 03:41:06.514876    3269 out.go:201] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.11s)
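The French output above comes from minikube's localized message catalog; a sketch, assuming (as the test name suggests) the language is selected through the standard locale environment variables:
# assumption: minikube picks translations from LC_ALL/LANG
LC_ALL=fr_FR.UTF-8 out/minikube-darwin-arm64 start -p functional-853000 --dry-run --memory 250MB --driver=qemu2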

                                                
                                    
TestFunctional/parallel/StatusCmd (0.23s)
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-darwin-arm64 -p functional-853000 status
functional_test.go:860: (dbg) Run:  out/minikube-darwin-arm64 -p functional-853000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-darwin-arm64 -p functional-853000 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.23s)
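status accepts a Go template over its status struct as well as JSON output; a minimal sketch of the three forms exercised above (template fields as shown in this run):
out/minikube-darwin-arm64 -p functional-853000 status
out/minikube-darwin-arm64 -p functional-853000 status -o json
out/minikube-darwin-arm64 -p functional-853000 status -f 'host:{{.Host}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}'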

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.1s)
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-darwin-arm64 -p functional-853000 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-darwin-arm64 -p functional-853000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.10s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (25.51s)
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [ad66bf63-0b7e-4c2e-86f8-32b0293c63d3] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.005377666s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-853000 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-853000 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-853000 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-853000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [fbda0710-018c-4ec4-b142-6b39e834d62a] Pending
helpers_test.go:344: "sp-pod" [fbda0710-018c-4ec4-b142-6b39e834d62a] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [fbda0710-018c-4ec4-b142-6b39e834d62a] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.009717625s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-853000 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-853000 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-853000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [248f6bb8-5947-423c-8daf-51ff8be514d9] Pending
helpers_test.go:344: "sp-pod" [248f6bb8-5947-423c-8daf-51ff8be514d9] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [248f6bb8-5947-423c-8daf-51ff8be514d9] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.010194833s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-853000 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (25.51s)
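The pod is deleted and recreated above precisely to prove the volume outlives it; a minimal sketch of the persistence check (manifests as in the test tree, claim and pod names from this run):
kubectl --context functional-853000 apply -f testdata/storage-provisioner/pvc.yaml
kubectl --context functional-853000 apply -f testdata/storage-provisioner/pod.yaml
kubectl --context functional-853000 exec sp-pod -- touch /tmp/mount/foo
# recreate the pod; /tmp/mount is backed by the PVC, so foo survives
kubectl --context functional-853000 delete -f testdata/storage-provisioner/pod.yaml
kubectl --context functional-853000 apply -f testdata/storage-provisioner/pod.yaml
kubectl --context functional-853000 exec sp-pod -- ls /tmp/mount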

                                                
                                    
TestFunctional/parallel/SSHCmd (0.12s)
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-darwin-arm64 -p functional-853000 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-darwin-arm64 -p functional-853000 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.12s)

                                                
                                    
TestFunctional/parallel/CpCmd (0.45s)
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-853000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-853000 ssh -n functional-853000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-853000 cp functional-853000:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelCpCmd1487993414/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-853000 ssh -n functional-853000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-853000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-853000 ssh -n functional-853000 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (0.45s)
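cp works in both directions, with node-side paths prefixed by the node name; a minimal sketch (the host destination path is illustrative):
# host -> node
out/minikube-darwin-arm64 -p functional-853000 cp testdata/cp-test.txt /home/docker/cp-test.txt
# node -> host: prefix the source with the node (here the single-node profile name)
out/minikube-darwin-arm64 -p functional-853000 cp functional-853000:/home/docker/cp-test.txt /tmp/cp-test.txt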

                                                
                                    
TestFunctional/parallel/FileSync (0.07s)
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/1929/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-darwin-arm64 -p functional-853000 ssh "sudo cat /etc/test/nested/copy/1929/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.07s)
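A sketch of how the synced file gets there, assuming the usual minikube convention that files placed under $MINIKUBE_HOME/files are mirrored into the node at the same relative path on start (1929 is this run's test PID):
# assumption: .minikube/files/etc/test/nested/copy/1929/hosts was written on the host before start
out/minikube-darwin-arm64 -p functional-853000 ssh "sudo cat /etc/test/nested/copy/1929/hosts"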

                                                
                                    
TestFunctional/parallel/CertSync (0.37s)
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/1929.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-arm64 -p functional-853000 ssh "sudo cat /etc/ssl/certs/1929.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/1929.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-arm64 -p functional-853000 ssh "sudo cat /usr/share/ca-certificates/1929.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-arm64 -p functional-853000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/19292.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-arm64 -p functional-853000 ssh "sudo cat /etc/ssl/certs/19292.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/19292.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-arm64 -p functional-853000 ssh "sudo cat /usr/share/ca-certificates/19292.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-arm64 -p functional-853000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (0.37s)
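A sketch of the verification, assuming the usual minikube convention that certificates under $MINIKUBE_HOME/certs are installed into the node's trust store on start (the 1929/19292 file names derive from this run's test PID; 51391683.0 looks like the OpenSSL subject-hash alias):
out/minikube-darwin-arm64 -p functional-853000 ssh "sudo cat /etc/ssl/certs/1929.pem"
out/minikube-darwin-arm64 -p functional-853000 ssh "sudo cat /usr/share/ca-certificates/1929.pem"
out/minikube-darwin-arm64 -p functional-853000 ssh "sudo cat /etc/ssl/certs/51391683.0"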

                                                
                                    
TestFunctional/parallel/NodeLabels (0.04s)
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-853000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.04s)
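The go-template above iterates the first node's label map and prints only the keys; a minimal sketch of the same query:
kubectl --context functional-853000 get nodes --output=go-template \
  --template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'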

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.06s)
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-darwin-arm64 -p functional-853000 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-853000 ssh "sudo systemctl is-active crio": exit status 1 (58.456917ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.06s)
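This profile runs the docker runtime, so crio must be inactive; a minimal sketch of the check (systemctl is-active exits 3 for an inactive unit, which ssh propagates as the status seen above):
out/minikube-darwin-arm64 -p functional-853000 ssh "sudo systemctl is-active crio"   # prints "inactive", nonzero exit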

                                                
                                    
TestFunctional/parallel/License (1.45s)
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-darwin-arm64 license
functional_test.go:2288: (dbg) Done: out/minikube-darwin-arm64 license: (1.452099459s)
--- PASS: TestFunctional/parallel/License (1.45s)

                                                
                                    
TestFunctional/parallel/Version/short (0.04s)
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-darwin-arm64 -p functional-853000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.04s)

                                                
                                    
TestFunctional/parallel/Version/components (0.2s)
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-darwin-arm64 -p functional-853000 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.20s)
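A minimal sketch of the two version forms exercised by this pair of tests:
out/minikube-darwin-arm64 -p functional-853000 version --short                 # minikube version only
out/minikube-darwin-arm64 -p functional-853000 version -o=json --components   # bundled component versions as JSON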

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.08s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-853000 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-853000 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.1
registry.k8s.io/kube-proxy:v1.31.1
registry.k8s.io/kube-controller-manager:v1.31.1
registry.k8s.io/kube-apiserver:v1.31.1
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.3
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-853000
docker.io/kicbase/echo-server:functional-853000
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-853000 image ls --format short --alsologtostderr:
I0930 03:41:21.438434    3365 out.go:345] Setting OutFile to fd 1 ...
I0930 03:41:21.438786    3365 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0930 03:41:21.438790    3365 out.go:358] Setting ErrFile to fd 2...
I0930 03:41:21.438793    3365 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0930 03:41:21.438922    3365 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19734-1406/.minikube/bin
I0930 03:41:21.439370    3365 config.go:182] Loaded profile config "functional-853000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0930 03:41:21.439432    3365 config.go:182] Loaded profile config "functional-853000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0930 03:41:21.440250    3365 ssh_runner.go:195] Run: systemctl --version
I0930 03:41:21.440259    3365 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19734-1406/.minikube/machines/functional-853000/id_rsa Username:docker}
I0930 03:41:21.464381    3365 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.08s)
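image ls supports several output formats; a minimal sketch (the table, json, and yaml variants appear in the next entries):
out/minikube-darwin-arm64 -p functional-853000 image ls --format short   # one repo:tag per line
out/minikube-darwin-arm64 -p functional-853000 image ls --format table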

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.07s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-853000 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-853000 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| registry.k8s.io/coredns/coredns             | v1.11.3           | 2f6c962e7b831 | 60.2MB |
| docker.io/kicbase/echo-server               | functional-853000 | ce2d2cda2d858 | 4.78MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | ba04bb24b9575 | 29MB   |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 1611cd07b61d5 | 3.55MB |
| registry.k8s.io/kube-controller-manager     | v1.31.1           | 279f381cb3736 | 85.9MB |
| docker.io/library/nginx                     | latest            | 6e8672ddd037e | 193MB  |
| registry.k8s.io/pause                       | 3.10              | afb61768ce381 | 514kB  |
| registry.k8s.io/pause                       | 3.1               | 8057e0500773a | 525kB  |
| docker.io/library/nginx                     | alpine            | b887aca7aed61 | 47MB   |
| registry.k8s.io/echoserver-arm              | 1.8               | 72565bf5bbedf | 85MB   |
| registry.k8s.io/pause                       | latest            | 8cb2091f603e7 | 240kB  |
| registry.k8s.io/kube-scheduler              | v1.31.1           | 7f8aa378bb47d | 66MB   |
| registry.k8s.io/kube-apiserver              | v1.31.1           | d3f53a98c0a9d | 91.6MB |
| registry.k8s.io/kube-proxy                  | v1.31.1           | 24a140c548c07 | 94.7MB |
| registry.k8s.io/etcd                        | 3.5.15-0          | 27e3830e14027 | 139MB  |
| registry.k8s.io/pause                       | 3.3               | 3d18732f8686c | 484kB  |
| docker.io/library/minikube-local-cache-test | functional-853000 | fe1ff37f2ade3 | 30B    |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-853000 image ls --format table --alsologtostderr:
I0930 03:41:21.590857    3369 out.go:345] Setting OutFile to fd 1 ...
I0930 03:41:21.591002    3369 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0930 03:41:21.591005    3369 out.go:358] Setting ErrFile to fd 2...
I0930 03:41:21.591008    3369 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0930 03:41:21.591123    3369 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19734-1406/.minikube/bin
I0930 03:41:21.591562    3369 config.go:182] Loaded profile config "functional-853000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0930 03:41:21.591622    3369 config.go:182] Loaded profile config "functional-853000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0930 03:41:21.592450    3369 ssh_runner.go:195] Run: systemctl --version
I0930 03:41:21.592459    3369 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19734-1406/.minikube/machines/functional-853000/id_rsa Username:docker}
I0930 03:41:21.617034    3369 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.07s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.08s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-853000 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-853000 image ls --format json --alsologtostderr:
[{"id":"d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.1"],"size":"91600000"},{"id":"b887aca7aed6134b029401507d27ac9c8fbfc5a6cf510d254bdf4ac841cf1552","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"47000000"},{"id":"27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"139000000"},{"id":"afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10"],"size":"514000"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3550000"},{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":[],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"85000000"},{"id":"fe1ff37f2ade34dee0093c349a7676bbd50071878363b067b3118cb8a2395a3d","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-853000"],"size":"30"},{"id":"7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.1"],"size":"66000000"},{"id":"2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"60200000"},{"id":"279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.1"],"size":"85900000"},{"id":"24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.31.1"],"size":"94700000"},{"id":"6e8672ddd037e6078cad0c819d331972e2a0c8e2aee506fcb94258c2536e4cf2","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"193000000"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29000000"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"484000"},{"id":"ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-853000"],"size":"4780000"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"525000"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"}]
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-853000 image ls --format json --alsologtostderr:
I0930 03:41:21.515054    3367 out.go:345] Setting OutFile to fd 1 ...
I0930 03:41:21.515210    3367 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0930 03:41:21.515214    3367 out.go:358] Setting ErrFile to fd 2...
I0930 03:41:21.515216    3367 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0930 03:41:21.515364    3367 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19734-1406/.minikube/bin
I0930 03:41:21.515803    3367 config.go:182] Loaded profile config "functional-853000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0930 03:41:21.515871    3367 config.go:182] Loaded profile config "functional-853000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0930 03:41:21.516760    3367 ssh_runner.go:195] Run: systemctl --version
I0930 03:41:21.516770    3367 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19734-1406/.minikube/machines/functional-853000/id_rsa Username:docker}
I0930 03:41:21.543431    3367 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.08s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.07s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-853000 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-853000 image ls --format yaml --alsologtostderr:
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.1
size: "85900000"
- id: 7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.1
size: "66000000"
- id: b887aca7aed6134b029401507d27ac9c8fbfc5a6cf510d254bdf4ac841cf1552
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "47000000"
- id: ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-853000
size: "4780000"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3550000"
- id: d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.1
size: "91600000"
- id: 6e8672ddd037e6078cad0c819d331972e2a0c8e2aee506fcb94258c2536e4cf2
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "193000000"
- id: 2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "60200000"
- id: afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10
size: "514000"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29000000"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "484000"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "525000"
- id: fe1ff37f2ade34dee0093c349a7676bbd50071878363b067b3118cb8a2395a3d
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-853000
size: "30"
- id: 24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.31.1
size: "94700000"
- id: 27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "139000000"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests: []
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "85000000"

functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-853000 image ls --format yaml --alsologtostderr:
I0930 03:41:21.661476    3371 out.go:345] Setting OutFile to fd 1 ...
I0930 03:41:21.661629    3371 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0930 03:41:21.661633    3371 out.go:358] Setting ErrFile to fd 2...
I0930 03:41:21.661635    3371 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0930 03:41:21.661777    3371 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19734-1406/.minikube/bin
I0930 03:41:21.662212    3371 config.go:182] Loaded profile config "functional-853000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0930 03:41:21.662272    3371 config.go:182] Loaded profile config "functional-853000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0930 03:41:21.663096    3371 ssh_runner.go:195] Run: systemctl --version
I0930 03:41:21.663105    3371 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19734-1406/.minikube/machines/functional-853000/id_rsa Username:docker}
I0930 03:41:21.688241    3371 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.07s)

TestFunctional/parallel/ImageCommands/ImageBuild (4.8s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-darwin-arm64 -p functional-853000 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-853000 ssh pgrep buildkitd: exit status 1 (65.449917ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-darwin-arm64 -p functional-853000 image build -t localhost/my-image:functional-853000 testdata/build --alsologtostderr
2024/09/30 03:41:26 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:315: (dbg) Done: out/minikube-darwin-arm64 -p functional-853000 image build -t localhost/my-image:functional-853000 testdata/build --alsologtostderr: (4.6620065s)
functional_test.go:323: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-853000 image build -t localhost/my-image:functional-853000 testdata/build --alsologtostderr:
I0930 03:41:21.795368    3375 out.go:345] Setting OutFile to fd 1 ...
I0930 03:41:21.795613    3375 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0930 03:41:21.795617    3375 out.go:358] Setting ErrFile to fd 2...
I0930 03:41:21.795619    3375 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0930 03:41:21.795751    3375 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19734-1406/.minikube/bin
I0930 03:41:21.796222    3375 config.go:182] Loaded profile config "functional-853000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0930 03:41:21.796898    3375 config.go:182] Loaded profile config "functional-853000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0930 03:41:21.797839    3375 ssh_runner.go:195] Run: systemctl --version
I0930 03:41:21.797849    3375 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19734-1406/.minikube/machines/functional-853000/id_rsa Username:docker}
I0930 03:41:21.823077    3375 build_images.go:161] Building image from path: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/build.3013323048.tar
I0930 03:41:21.823154    3375 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0930 03:41:21.830873    3375 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3013323048.tar
I0930 03:41:21.835664    3375 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3013323048.tar: stat -c "%s %y" /var/lib/minikube/build/build.3013323048.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.3013323048.tar': No such file or directory
I0930 03:41:21.835685    3375 ssh_runner.go:362] scp /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/build.3013323048.tar --> /var/lib/minikube/build/build.3013323048.tar (3072 bytes)
I0930 03:41:21.852073    3375 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3013323048
I0930 03:41:21.858693    3375 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3013323048 -xf /var/lib/minikube/build/build.3013323048.tar
I0930 03:41:21.864970    3375 docker.go:360] Building image: /var/lib/minikube/build/build.3013323048
I0930 03:41:21.865046    3375 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-853000 /var/lib/minikube/build/build.3013323048
#0 building with "default" instance using docker driver

#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 2.6s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b done
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9 527B / 527B done
#5 sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02 1.47kB / 1.47kB done
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.1s
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 1.6s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.1s done
#5 DONE 1.6s

#6 [2/3] RUN true
#6 DONE 0.1s

#7 [3/3] ADD content.txt /
#7 DONE 0.0s

#8 exporting to image
#8 exporting layers 0.0s done
#8 writing image sha256:a7702a5ce610af3f5488bfb37f09cc58a41f8dafc731a2de3b202b867a320139 done
#8 naming to localhost/my-image:functional-853000 done
#8 DONE 0.0s
I0930 03:41:26.413357    3375 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-853000 /var/lib/minikube/build/build.3013323048: (4.548417167s)
I0930 03:41:26.413424    3375 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3013323048
I0930 03:41:26.417448    3375 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3013323048.tar
I0930 03:41:26.422444    3375 build_images.go:217] Built localhost/my-image:functional-853000 from /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/build.3013323048.tar
I0930 03:41:26.422462    3375 build_images.go:133] succeeded building to: functional-853000
I0930 03:41:26.422464    3375 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-853000 image ls
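
Note: the Dockerfile under testdata/build is not echoed in this log, but the BuildKit steps above pin down its instructions: step #5 pulls gcr.io/k8s-minikube/busybox:latest, step #6 runs `RUN true`, and step #7 adds content.txt. A minimal sketch of the fixture, assuming it contains nothing beyond what the build steps show:

# Hypothetical reconstruction of testdata/build/Dockerfile, inferred from
# build steps #5-#7 above; the verbatim fixture may differ.
cat > testdata/build/Dockerfile <<'EOF'
FROM gcr.io/k8s-minikube/busybox:latest
RUN true
ADD content.txt /
EOF
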
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.80s)

TestFunctional/parallel/ImageCommands/Setup (1.79s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.777196958s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-853000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.79s)

TestFunctional/parallel/DockerEnv/bash (0.27s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:499: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-853000 docker-env) && out/minikube-darwin-arm64 status -p functional-853000"
functional_test.go:522: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-853000 docker-env) && docker images"
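
Note: the `eval $(... docker-env)` wrapper works because `minikube docker-env` prints shell exports that point the host's docker client at the dockerd inside the VM. A sketch of representative output, assuming the default settings for this qemu2/docker profile (the exact variables and values can differ by driver and runtime):

# Illustrative output of: out/minikube-darwin-arm64 -p functional-853000 docker-env
export DOCKER_TLS_VERIFY="1"
export DOCKER_HOST="tcp://192.168.105.4:2376"
export DOCKER_CERT_PATH="/Users/jenkins/minikube-integration/19734-1406/.minikube/certs"
export MINIKUBE_ACTIVE_DOCKERD="functional-853000"
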
--- PASS: TestFunctional/parallel/DockerEnv/bash (0.27s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.05s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-arm64 -p functional-853000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.05s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.05s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-arm64 -p functional-853000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.05s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.05s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-arm64 -p functional-853000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.05s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.48s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-darwin-arm64 -p functional-853000 image load --daemon kicbase/echo-server:functional-853000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-853000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.48s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.15s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-darwin-arm64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.15s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.44s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-darwin-arm64 -p functional-853000 image load --daemon kicbase/echo-server:functional-853000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-853000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.44s)

TestFunctional/parallel/ProfileCmd/profile_list (0.14s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-darwin-arm64 profile list
functional_test.go:1315: Took "99.888875ms" to run "out/minikube-darwin-arm64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-darwin-arm64 profile list -l
functional_test.go:1329: Took "35.491458ms" to run "out/minikube-darwin-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.14s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.21s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json
functional_test.go:1366: Took "174.24025ms" to run "out/minikube-darwin-arm64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json --light
functional_test.go:1379: Took "37.645458ms" to run "out/minikube-darwin-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.21s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.12s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-853000
functional_test.go:245: (dbg) Run:  out/minikube-darwin-arm64 -p functional-853000 image load --daemon kicbase/echo-server:functional-853000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-853000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.12s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.22s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-853000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-853000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-853000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 3146: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-853000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.22s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-853000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.1s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-853000 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [38c02265-491e-4213-af05-bd21a0d760bf] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [38c02265-491e-4213-af05-bd21a0d760bf] Running
E0930 03:40:38.740144    1929 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19734-1406/.minikube/profiles/addons-584000/client.crt: no such file or directory" logger="UnhandledError"
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 10.003779084s
I0930 03:40:42.471877    1929 kapi.go:150] Service nginx-svc in namespace default found.
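
Note: testdata/testsvc.yaml is applied but not echoed here. From the pod name and label (nginx-svc, run=nginx-svc) and the LoadBalancer ingress IP checked in the IngressIP subtest below, an imperative sketch of roughly what the manifest declares (image, port, and flags are assumptions, not the verbatim fixture):

# Rough equivalent of testdata/testsvc.yaml:
kubectl --context functional-853000 run nginx-svc --image=nginx --labels=run=nginx-svc
kubectl --context functional-853000 expose pod nginx-svc --type=LoadBalancer --port=80
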
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.10s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.13s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-darwin-arm64 -p functional-853000 image save kicbase/echo-server:functional-853000 /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.13s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.13s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-darwin-arm64 -p functional-853000 image rm kicbase/echo-server:functional-853000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-853000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.13s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.2s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-darwin-arm64 -p functional-853000 image load /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-853000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.20s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-853000
functional_test.go:424: (dbg) Run:  out/minikube-darwin-arm64 -p functional-853000 image save --daemon kicbase/echo-server:functional-853000 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-853000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.23s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.04s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-853000 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.04s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.105.91.216 is working!
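
Note: a manual spot-check equivalent to this assertion, using the ingress IP reported above (assumes the tunnel started in StartTunnel is still running):

curl -fsS http://10.105.91.216/ >/dev/null && echo "tunnel reachable"
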
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
I0930 03:40:42.532817    1929 config.go:182] Loaded profile config "functional-853000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
functional_test_tunnel_test.go:319: (dbg) Run:  dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
functional_test_tunnel_test.go:327: DNS resolution by dig for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.02s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:351: (dbg) Run:  dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.
functional_test_tunnel_test.go:359: DNS resolution by dscacheutil for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.02s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
I0930 03:40:42.572310    1929 config.go:182] Loaded profile config "functional-853000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
functional_test_tunnel_test.go:424: tunnel at http://nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-arm64 -p functional-853000 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

TestFunctional/parallel/ServiceCmd/DeployApp (6.09s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1437: (dbg) Run:  kubectl --context functional-853000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-853000 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-64b4f8f9ff-6b7ln" [ae4198e9-671d-44d8-b7a9-61c6eb8d80d5] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-64b4f8f9ff-6b7ln" [ae4198e9-671d-44d8-b7a9-61c6eb8d80d5] Running / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 6.010785958s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (6.09s)

TestFunctional/parallel/ServiceCmd/List (0.32s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-darwin-arm64 -p functional-853000 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.32s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.29s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-darwin-arm64 -p functional-853000 service list -o json
functional_test.go:1494: Took "293.645959ms" to run "out/minikube-darwin-arm64 -p functional-853000 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.29s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.11s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-darwin-arm64 -p functional-853000 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.105.4:30437
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.11s)

TestFunctional/parallel/ServiceCmd/Format (0.1s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-darwin-arm64 -p functional-853000 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.10s)

TestFunctional/parallel/ServiceCmd/URL (0.1s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-darwin-arm64 -p functional-853000 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.105.4:30437
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.10s)

TestFunctional/parallel/MountCmd/any-port (9.87s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-853000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port4087139157/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1727692866526806000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port4087139157/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1727692866526806000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port4087139157/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1727692866526806000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port4087139157/001/test-1727692866526806000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-853000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Done: out/minikube-darwin-arm64 -p functional-853000 ssh "findmnt -T /mount-9p | grep 9p": (1.290261583s)
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-darwin-arm64 -p functional-853000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep 30 10:41 created-by-test
-rw-r--r-- 1 docker docker 24 Sep 30 10:41 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep 30 10:41 test-1727692866526806000
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-darwin-arm64 -p functional-853000 ssh cat /mount-9p/test-1727692866526806000
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-853000 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [19cd2428-d35f-43aa-87c7-938efd6e59db] Pending
helpers_test.go:344: "busybox-mount" [19cd2428-d35f-43aa-87c7-938efd6e59db] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [19cd2428-d35f-43aa-87c7-938efd6e59db] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [19cd2428-d35f-43aa-87c7-938efd6e59db] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 8.005182958s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-853000 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 -p functional-853000 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 -p functional-853000 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-arm64 -p functional-853000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-853000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port4087139157/001:/mount-9p --alsologtostderr -v=1] ...
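
Note: the stat checks above imply what the busybox-mount pod does inside the 9p mount. A sketch inferred from the file names alone; the actual command line in testdata/busybox-mount-test.yaml may differ:

# Inside the pod, with the host directory mounted at /mount-9p:
rm /mount-9p/created-by-test-removed-by-pod   # pod can delete a host-created file
echo test > /mount-9p/created-by-pod          # pod can create a file the host then stats
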
--- PASS: TestFunctional/parallel/MountCmd/any-port (9.87s)

TestFunctional/parallel/MountCmd/specific-port (1.89s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-853000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port2631883095/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-853000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-853000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (57.625875ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0930 03:41:16.452232    1929 retry.go:31] will retry after 629.642056ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-853000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-853000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (88.8835ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0930 03:41:17.173124    1929 retry.go:31] will retry after 681.491144ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-853000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-darwin-arm64 -p functional-853000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-853000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port2631883095/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-arm64 -p functional-853000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-853000 ssh "sudo umount -f /mount-9p": exit status 1 (58.904541ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-darwin-arm64 -p functional-853000 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-853000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port2631883095/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.89s)

TestFunctional/parallel/MountCmd/VerifyCleanup (2.74s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-853000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2982313990/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-853000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2982313990/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-853000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2982313990/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-853000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-853000 ssh "findmnt -T" /mount1: exit status 1 (82.681667ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0930 03:41:18.365328    1929 retry.go:31] will retry after 587.083902ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-853000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-853000 ssh "findmnt -T" /mount1: exit status 1 (58.352042ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0930 03:41:19.012992    1929 retry.go:31] will retry after 384.910482ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-853000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-853000 ssh "findmnt -T" /mount1: exit status 1 (55.719459ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0930 03:41:19.455400    1929 retry.go:31] will retry after 1.331547686s: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-853000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-853000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-853000 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-darwin-arm64 mount -p functional-853000 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-853000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2982313990/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-853000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2982313990/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-853000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2982313990/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.74s)

TestFunctional/delete_echo-server_images (0.03s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-853000
--- PASS: TestFunctional/delete_echo-server_images (0.03s)

TestFunctional/delete_my-image_image (0.01s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-853000
--- PASS: TestFunctional/delete_my-image_image (0.01s)

TestFunctional/delete_minikube_cached_images (0.01s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-853000
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

TestMultiControlPlane/serial/StartCluster (227.97s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-937000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 
E0930 03:41:40.185060    1929 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19734-1406/.minikube/profiles/addons-584000/client.crt: no such file or directory" logger="UnhandledError"
E0930 03:43:02.106540    1929 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19734-1406/.minikube/profiles/addons-584000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-darwin-arm64 start -p ha-937000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 : (3m47.787381625s)
ha_test.go:107: (dbg) Run:  out/minikube-darwin-arm64 -p ha-937000 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (227.97s)

TestMultiControlPlane/serial/DeployApp (9.54s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-937000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-937000 -- rollout status deployment/busybox
E0930 03:45:18.216909    1929 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19734-1406/.minikube/profiles/addons-584000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:133: (dbg) Done: out/minikube-darwin-arm64 kubectl -p ha-937000 -- rollout status deployment/busybox: (7.841507291s)
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-937000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-937000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-937000 -- exec busybox-7dff88458-2kkbb -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-937000 -- exec busybox-7dff88458-qdn5t -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-937000 -- exec busybox-7dff88458-tdkhj -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-937000 -- exec busybox-7dff88458-2kkbb -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-937000 -- exec busybox-7dff88458-qdn5t -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-937000 -- exec busybox-7dff88458-tdkhj -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-937000 -- exec busybox-7dff88458-2kkbb -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-937000 -- exec busybox-7dff88458-qdn5t -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-937000 -- exec busybox-7dff88458-tdkhj -- nslookup kubernetes.default.svc.cluster.local
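
Note: testdata/ha/ha-pod-dns-test.yaml is not shown, but the `rollout status deployment/busybox` wait and the three busybox-7dff88458-* pods probed above imply roughly the following; the image and replica count are an inference, not the verbatim manifest:

# Rough equivalent of testdata/ha/ha-pod-dns-test.yaml:
kubectl --context ha-937000 create deployment busybox --image=gcr.io/k8s-minikube/busybox --replicas=3
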
--- PASS: TestMultiControlPlane/serial/DeployApp (9.54s)

TestMultiControlPlane/serial/PingHostFromPods (0.73s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-937000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-937000 -- exec busybox-7dff88458-2kkbb -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-937000 -- exec busybox-7dff88458-2kkbb -- sh -c "ping -c 1 192.168.105.1"
ha_test.go:207: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-937000 -- exec busybox-7dff88458-qdn5t -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-937000 -- exec busybox-7dff88458-qdn5t -- sh -c "ping -c 1 192.168.105.1"
ha_test.go:207: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-937000 -- exec busybox-7dff88458-tdkhj -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-937000 -- exec busybox-7dff88458-tdkhj -- sh -c "ping -c 1 192.168.105.1"
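
Note on the pipeline above: with busybox's nslookup, the fifth output line is the "Address 1: <ip> ..." record for host.minikube.internal, so `awk 'NR==5'` selects that line and `cut -d' ' -f3` keeps the third space-separated field, the IP itself; each pod then pings that address (192.168.105.1, the QEMU host gateway here) to prove host reachability.
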
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (0.73s)

TestMultiControlPlane/serial/AddWorkerNode (58.24s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-937000 -v=7 --alsologtostderr
E0930 03:45:32.368557    1929 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19734-1406/.minikube/profiles/functional-853000/client.crt: no such file or directory" logger="UnhandledError"
E0930 03:45:32.376189    1929 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19734-1406/.minikube/profiles/functional-853000/client.crt: no such file or directory" logger="UnhandledError"
E0930 03:45:32.389299    1929 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19734-1406/.minikube/profiles/functional-853000/client.crt: no such file or directory" logger="UnhandledError"
E0930 03:45:32.412647    1929 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19734-1406/.minikube/profiles/functional-853000/client.crt: no such file or directory" logger="UnhandledError"
E0930 03:45:32.455038    1929 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19734-1406/.minikube/profiles/functional-853000/client.crt: no such file or directory" logger="UnhandledError"
E0930 03:45:32.538409    1929 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19734-1406/.minikube/profiles/functional-853000/client.crt: no such file or directory" logger="UnhandledError"
E0930 03:45:32.701789    1929 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19734-1406/.minikube/profiles/functional-853000/client.crt: no such file or directory" logger="UnhandledError"
E0930 03:45:33.023778    1929 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19734-1406/.minikube/profiles/functional-853000/client.crt: no such file or directory" logger="UnhandledError"
E0930 03:45:33.666894    1929 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19734-1406/.minikube/profiles/functional-853000/client.crt: no such file or directory" logger="UnhandledError"
E0930 03:45:34.950257    1929 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19734-1406/.minikube/profiles/functional-853000/client.crt: no such file or directory" logger="UnhandledError"
E0930 03:45:37.513640    1929 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19734-1406/.minikube/profiles/functional-853000/client.crt: no such file or directory" logger="UnhandledError"
E0930 03:45:42.636983    1929 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19734-1406/.minikube/profiles/functional-853000/client.crt: no such file or directory" logger="UnhandledError"
E0930 03:45:45.945700    1929 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19734-1406/.minikube/profiles/addons-584000/client.crt: no such file or directory" logger="UnhandledError"
E0930 03:45:52.879231    1929 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19734-1406/.minikube/profiles/functional-853000/client.crt: no such file or directory" logger="UnhandledError"
E0930 03:46:13.360409    1929 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19734-1406/.minikube/profiles/functional-853000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-darwin-arm64 node add -p ha-937000 -v=7 --alsologtostderr: (58.031144292s)
ha_test.go:234: (dbg) Run:  out/minikube-darwin-arm64 -p ha-937000 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (58.24s)

TestMultiControlPlane/serial/NodeLabels (0.13s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-937000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.13s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.29s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.29s)

TestMultiControlPlane/serial/CopyFile (4.21s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 -p ha-937000 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-937000 cp testdata/cp-test.txt ha-937000:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-937000 ssh -n ha-937000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-937000 cp ha-937000:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestMultiControlPlaneserialCopyFile1360246818/001/cp-test_ha-937000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-937000 ssh -n ha-937000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-937000 cp ha-937000:/home/docker/cp-test.txt ha-937000-m02:/home/docker/cp-test_ha-937000_ha-937000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-937000 ssh -n ha-937000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-937000 ssh -n ha-937000-m02 "sudo cat /home/docker/cp-test_ha-937000_ha-937000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-937000 cp ha-937000:/home/docker/cp-test.txt ha-937000-m03:/home/docker/cp-test_ha-937000_ha-937000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-937000 ssh -n ha-937000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-937000 ssh -n ha-937000-m03 "sudo cat /home/docker/cp-test_ha-937000_ha-937000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-937000 cp ha-937000:/home/docker/cp-test.txt ha-937000-m04:/home/docker/cp-test_ha-937000_ha-937000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-937000 ssh -n ha-937000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-937000 ssh -n ha-937000-m04 "sudo cat /home/docker/cp-test_ha-937000_ha-937000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-937000 cp testdata/cp-test.txt ha-937000-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-937000 ssh -n ha-937000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-937000 cp ha-937000-m02:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestMultiControlPlaneserialCopyFile1360246818/001/cp-test_ha-937000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-937000 ssh -n ha-937000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-937000 cp ha-937000-m02:/home/docker/cp-test.txt ha-937000:/home/docker/cp-test_ha-937000-m02_ha-937000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-937000 ssh -n ha-937000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-937000 ssh -n ha-937000 "sudo cat /home/docker/cp-test_ha-937000-m02_ha-937000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-937000 cp ha-937000-m02:/home/docker/cp-test.txt ha-937000-m03:/home/docker/cp-test_ha-937000-m02_ha-937000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-937000 ssh -n ha-937000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-937000 ssh -n ha-937000-m03 "sudo cat /home/docker/cp-test_ha-937000-m02_ha-937000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-937000 cp ha-937000-m02:/home/docker/cp-test.txt ha-937000-m04:/home/docker/cp-test_ha-937000-m02_ha-937000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-937000 ssh -n ha-937000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-937000 ssh -n ha-937000-m04 "sudo cat /home/docker/cp-test_ha-937000-m02_ha-937000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-937000 cp testdata/cp-test.txt ha-937000-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-937000 ssh -n ha-937000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-937000 cp ha-937000-m03:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestMultiControlPlaneserialCopyFile1360246818/001/cp-test_ha-937000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-937000 ssh -n ha-937000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-937000 cp ha-937000-m03:/home/docker/cp-test.txt ha-937000:/home/docker/cp-test_ha-937000-m03_ha-937000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-937000 ssh -n ha-937000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-937000 ssh -n ha-937000 "sudo cat /home/docker/cp-test_ha-937000-m03_ha-937000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-937000 cp ha-937000-m03:/home/docker/cp-test.txt ha-937000-m02:/home/docker/cp-test_ha-937000-m03_ha-937000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-937000 ssh -n ha-937000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-937000 ssh -n ha-937000-m02 "sudo cat /home/docker/cp-test_ha-937000-m03_ha-937000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-937000 cp ha-937000-m03:/home/docker/cp-test.txt ha-937000-m04:/home/docker/cp-test_ha-937000-m03_ha-937000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-937000 ssh -n ha-937000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-937000 ssh -n ha-937000-m04 "sudo cat /home/docker/cp-test_ha-937000-m03_ha-937000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-937000 cp testdata/cp-test.txt ha-937000-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-937000 ssh -n ha-937000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-937000 cp ha-937000-m04:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestMultiControlPlaneserialCopyFile1360246818/001/cp-test_ha-937000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-937000 ssh -n ha-937000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-937000 cp ha-937000-m04:/home/docker/cp-test.txt ha-937000:/home/docker/cp-test_ha-937000-m04_ha-937000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-937000 ssh -n ha-937000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-937000 ssh -n ha-937000 "sudo cat /home/docker/cp-test_ha-937000-m04_ha-937000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-937000 cp ha-937000-m04:/home/docker/cp-test.txt ha-937000-m02:/home/docker/cp-test_ha-937000-m04_ha-937000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-937000 ssh -n ha-937000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-937000 ssh -n ha-937000-m02 "sudo cat /home/docker/cp-test_ha-937000-m04_ha-937000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-937000 cp ha-937000-m04:/home/docker/cp-test.txt ha-937000-m03:/home/docker/cp-test_ha-937000-m04_ha-937000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-937000 ssh -n ha-937000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-937000 ssh -n ha-937000-m03 "sudo cat /home/docker/cp-test_ha-937000-m04_ha-937000-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (4.21s)
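The CopyFile helpers above repeat one round trip per node pair: "minikube cp" places a file on a node, then "minikube ssh -n <node>" cats it back so the contents can be compared. A minimal Go sketch of that round trip, assuming the binary path and profile name from this run (both illustrative here, not part of any helper API):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	bin := "out/minikube-darwin-arm64" // binary path used in this run (assumption)
	profile := "ha-937000"             // profile name used in this run (assumption)

	// Copy a local test file into the primary control-plane node.
	cp := exec.Command(bin, "-p", profile, "cp",
		"testdata/cp-test.txt", profile+":/home/docker/cp-test.txt")
	if out, err := cp.CombinedOutput(); err != nil {
		fmt.Printf("cp failed: %v\n%s", err, out)
		return
	}

	// Read the file back over SSH to confirm the copy landed intact.
	ssh := exec.Command(bin, "-p", profile, "ssh", "-n", profile,
		"sudo cat /home/docker/cp-test.txt")
	out, err := ssh.CombinedOutput()
	if err != nil {
		fmt.Printf("ssh failed: %v\n%s", err, out)
		return
	}
	fmt.Printf("copied file contents: %s", out)
}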

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (26.22s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-darwin-arm64 profile list --output json: (26.219627s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (26.22s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.05s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.05s)

TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (1.96s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 stop -p json-output-879000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-arm64 stop -p json-output-879000 --output=json --user=testUser: (1.962748917s)
--- PASS: TestJSONOutput/stop/Command (1.96s)

TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.2s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-error-355000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-error-355000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (93.727ms)

-- stdout --
	{"specversion":"1.0","id":"fc374f99-0001-41b9-a885-bd8e05e6a937","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-355000] minikube v1.34.0 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"c1b8859d-03d6-4bd8-a354-53468c51cd0e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19734"}}
	{"specversion":"1.0","id":"6442df8a-1cfa-473c-9897-365b35885770","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19734-1406/kubeconfig"}}
	{"specversion":"1.0","id":"d0d02090-ba67-4dba-9261-ca3e281c710e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"c63f27da-987e-45eb-8b71-c3c20dc857ab","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"51dd44f3-787b-4098-9c66-f6ab2fc55b96","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19734-1406/.minikube"}}
	{"specversion":"1.0","id":"04b29aa8-ac36-49f9-a1b5-144d3131ce78","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"2336ddd9-8f96-4f10-811e-75beb158b62a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-355000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p json-output-error-355000
--- PASS: TestErrorJSONOutput (0.20s)
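Each line captured in the stdout block above is a CloudEvents-style JSON object; the final error event carries the failure name and exit code (DRV_UNSUPPORTED_OS, exit 56). A minimal Go sketch of consuming such a stream, assuming only the fields visible in this log; the struct below is illustrative, not minikube's own type:

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// event mirrors the fields visible in the captured output above.
type event struct {
	SpecVersion string            `json:"specversion"`
	ID          string            `json:"id"`
	Source      string            `json:"source"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	// Pipe the JSON lines (e.g. from "minikube start --output=json") into stdin.
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		var ev event
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // not a JSON event line; skip it
		}
		// Error events carry name, message, and exitcode in data,
		// e.g. DRV_UNSUPPORTED_OS with exit code 56 in the run above.
		if ev.Type == "io.k8s.sigs.minikube.error" {
			fmt.Printf("%s (exit %s): %s\n",
				ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
		}
	}
}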

TestMainNoArgs (0.03s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-arm64
--- PASS: TestMainNoArgs (0.03s)

TestStoppedBinaryUpgrade/Setup (5.01s)
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (5.01s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-953000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-953000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 : exit status 14 (102.7025ms)

-- stdout --
	* [NoKubernetes-953000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19734
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19734-1406/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19734-1406/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-953000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-953000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (42.609625ms)

-- stdout --
	* The control-plane node NoKubernetes-953000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-953000"

-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)

TestNoKubernetes/serial/ProfileList (31.42s)
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-arm64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-darwin-arm64 profile list: (15.667280292s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-arm64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-darwin-arm64 profile list --output=json: (15.751126167s)
--- PASS: TestNoKubernetes/serial/ProfileList (31.42s)

TestNoKubernetes/serial/Stop (3.67s)
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-arm64 stop -p NoKubernetes-953000
no_kubernetes_test.go:158: (dbg) Done: out/minikube-darwin-arm64 stop -p NoKubernetes-953000: (3.670787208s)
--- PASS: TestNoKubernetes/serial/Stop (3.67s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-953000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-953000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (40.602958ms)

-- stdout --
	* The control-plane node NoKubernetes-953000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-953000"

-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.65s)
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-darwin-arm64 logs -p stopped-upgrade-312000
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.65s)

TestStartStop/group/old-k8s-version/serial/Stop (3.42s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p old-k8s-version-153000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p old-k8s-version-153000 --alsologtostderr -v=3: (3.422315541s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (3.42s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.11s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-153000 -n old-k8s-version-153000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-153000 -n old-k8s-version-153000: exit status 7 (35.50425ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p old-k8s-version-153000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.11s)
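The "status error: exit status 7 (may be ok)" note above reflects how these helpers read minikube's status: a profile that exists but whose host is stopped exits with code 7 rather than 0, and the test proceeds to enable the addon anyway. A minimal Go sketch of that exit-code check, assuming the binary path and profile name from this run:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-darwin-arm64", "status",
		"--format={{.Host}}", "-p", "old-k8s-version-153000")
	out, err := cmd.CombinedOutput()

	var ee *exec.ExitError
	switch {
	case err == nil:
		fmt.Printf("host: %s\n", out)
	case errors.As(err, &ee) && ee.ExitCode() == 7:
		// Exit 7 means the profile exists but the host is stopped;
		// the EnableAddonAfterStop flow treats this as acceptable.
		fmt.Printf("host stopped (exit 7, may be ok): %s\n", out)
	default:
		fmt.Printf("status failed: %v\n%s", err, out)
	}
}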

TestStartStop/group/no-preload/serial/Stop (2.13s)
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p no-preload-616000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p no-preload-616000 --alsologtostderr -v=3: (2.126011916s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (2.13s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.13s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-616000 -n no-preload-616000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-616000 -n no-preload-616000: exit status 7 (52.432167ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p no-preload-616000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.13s)

TestStartStop/group/embed-certs/serial/Stop (2.14s)
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p embed-certs-846000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p embed-certs-846000 --alsologtostderr -v=3: (2.143163208s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (2.14s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.13s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-846000 -n embed-certs-846000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-846000 -n embed-certs-846000: exit status 7 (59.226833ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p embed-certs-846000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.13s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (3.22s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p default-k8s-diff-port-497000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p default-k8s-diff-port-497000 --alsologtostderr -v=3: (3.2210765s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (3.22s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.12s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-497000 -n default-k8s-diff-port-497000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-497000 -n default-k8s-diff-port-497000: exit status 7 (57.985959ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p default-k8s-diff-port-497000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p newest-cni-576000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

TestStartStop/group/newest-cni/serial/Stop (3.31s)
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p newest-cni-576000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p newest-cni-576000 --alsologtostderr -v=3: (3.312836375s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (3.31s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.12s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-576000 -n newest-cni-576000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-576000 -n newest-cni-576000: exit status 7 (58.828958ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p newest-cni-576000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

Test skip (20/273)

TestDownloadOnly/v1.20.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.31.1/cached-images (0s)
=== RUN   TestDownloadOnly/v1.31.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.1/cached-images (0.00s)

TestDownloadOnly/v1.31.1/binaries (0s)
=== RUN   TestDownloadOnly/v1.31.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.1/binaries (0.00s)

TestDownloadOnlyKic (0s)
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/parallel/Olm (0s)
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:438: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false darwin arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/MySQL (0s)
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1787: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestGvisorAddon (0s)
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestKicCustomNetwork (0s)
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestScheduledStopWindows (0s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/cilium (2.34s)
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-962000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-962000

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-962000

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-962000

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-962000

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-962000

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-962000

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-962000

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-962000

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-962000

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-962000

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-962000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-962000"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-962000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-962000"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-962000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-962000"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-962000

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-962000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-962000"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-962000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-962000"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-962000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-962000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-962000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-962000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-962000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-962000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-962000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-962000" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-962000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-962000"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-962000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-962000"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-962000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-962000"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-962000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-962000"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-962000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-962000"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-962000

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-962000

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-962000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-962000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-962000

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-962000

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-962000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-962000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-962000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-962000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-962000" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-962000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-962000"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-962000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-962000"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-962000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-962000"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-962000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-962000"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-962000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-962000"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-962000

>>> host: docker daemon status:
* Profile "cilium-962000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-962000"

>>> host: docker daemon config:
* Profile "cilium-962000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-962000"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-962000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-962000"

>>> host: docker system info:
* Profile "cilium-962000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-962000"

>>> host: cri-docker daemon status:
* Profile "cilium-962000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-962000"

>>> host: cri-docker daemon config:
* Profile "cilium-962000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-962000"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-962000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-962000"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-962000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-962000"

>>> host: cri-dockerd version:
* Profile "cilium-962000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-962000"

>>> host: containerd daemon status:
* Profile "cilium-962000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-962000"

>>> host: containerd daemon config:
* Profile "cilium-962000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-962000"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-962000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-962000"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-962000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-962000"

>>> host: containerd config dump:
* Profile "cilium-962000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-962000"

>>> host: crio daemon status:
* Profile "cilium-962000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-962000"

>>> host: crio daemon config:
* Profile "cilium-962000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-962000"

>>> host: /etc/crio:
* Profile "cilium-962000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-962000"

>>> host: crio config:
* Profile "cilium-962000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-962000"

----------------------- debugLogs end: cilium-962000 [took: 2.231889958s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-962000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cilium-962000
--- SKIP: TestNetworkPlugins/group/cilium (2.34s)

TestStartStop/group/disable-driver-mounts (0.11s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-459000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p disable-driver-mounts-459000
--- SKIP: TestStartStop/group/disable-driver-mounts (0.11s)
